title | content | commands | url
---|---|---|---|
Chapter 16. Configuring kernel parameters permanently by using RHEL system roles | Chapter 16. Configuring kernel parameters permanently by using RHEL system roles You can use the kernel_settings RHEL system role to configure kernel parameters on multiple clients simultaneously. Simultaneous configuration has the following advantages: Provides a friendly interface with efficient input setting. Keeps all intended kernel parameters in one place. After you run the kernel_settings role from the control machine, the kernel parameters are applied to the managed systems immediately and persist across reboots. Important Note that RHEL system roles delivered over RHEL channels are available to RHEL customers as an RPM package in the default AppStream repository. RHEL system roles are also available as a collection to customers with Ansible subscriptions over Ansible Automation Hub. 16.1. Applying selected kernel parameters by using the kernel_settings RHEL system role You can use the kernel_settings RHEL system role to remotely configure various kernel parameters across multiple managed operating systems with persistent effects. For example, you can configure: Transparent hugepages to increase performance by reducing the overhead of managing smaller pages. The largest packet sizes to be transmitted over the network with the loopback interface. Limits on files to be opened simultaneously. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configuring kernel settings hosts: managed-node-01.example.com tasks: - name: Configure hugepages, packet size for loopback device, and limits on simultaneously open files. ansible.builtin.include_role: name: rhel-system-roles.kernel_settings vars: kernel_settings_sysctl: - name: fs.file-max value: 400000 - name: kernel.threads-max value: 65536 kernel_settings_sysfs: - name: /sys/class/net/lo/mtu value: 65000 kernel_settings_transparent_hugepages: madvise kernel_settings_reboot_ok: true The settings specified in the example playbook include the following: kernel_settings_sysctl: <list_of_sysctl_settings> A YAML list of sysctl settings and the values you want to assign to these settings. kernel_settings_sysfs: <list_of_sysfs_settings> A YAML list of sysfs settings and the values you want to assign to these settings. kernel_settings_transparent_hugepages: <value> Controls the memory subsystem Transparent Huge Pages (THP) setting. You can disable THP support ( never ), enable it system wide ( always ) or inside MADV_HUGEPAGE regions ( madvise ). kernel_settings_reboot_ok: <true|false> The default is false . If set to true , the system role will determine if a reboot of the managed host is necessary for the requested changes to take effect and reboot it. If set to false , the role will return the variable kernel_settings_reboot_required with a value of true , indicating that a reboot is required. In this case, a user must reboot the managed node manually. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.kernel_settings/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. 
Run the playbook: Verification Verify the affected kernel parameters: Additional resources /usr/share/ansible/roles/rhel-system-roles.kernel_settings/README.md file /usr/share/doc/rhel-system-roles/kernel_settings/ directory | [
"--- - name: Configuring kernel settings hosts: managed-node-01.example.com tasks: - name: Configure hugepages, packet size for loopback device, and limits on simultaneously open files. ansible.builtin.include_role: name: rhel-system-roles.kernel_settings vars: kernel_settings_sysctl: - name: fs.file-max value: 400000 - name: kernel.threads-max value: 65536 kernel_settings_sysfs: - name: /sys/class/net/lo/mtu value: 65000 kernel_settings_transparent_hugepages: madvise kernel_settings_reboot_ok: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'sysctl fs.file-max kernel.threads-max net.ipv6.conf.lo.mtu' ansible managed-node-01.example.com -m command -a 'cat /sys/kernel/mm/transparent_hugepage/enabled'"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automating_system_administration_by_using_rhel_system_roles/configuring-kernel-parameters-permanently-by-using-the-kernel-settings-rhel-system-role_automating-system-administration-by-using-rhel-system-roles |
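For the kernel_settings row above, a quick way to confirm that the applied parameters really persist is to reboot the managed node and re-read the values with ad-hoc Ansible commands. The following is a minimal sketch, not part of the original procedure: it reuses the managed-node-01.example.com host and the exact parameters from the example playbook, and assumes you have privilege escalation rights on that node.

```bash
# Reboot the managed node (privilege escalation assumed).
ansible managed-node-01.example.com -b -m ansible.builtin.reboot

# After the node returns, confirm the sysctl and sysfs values survived the reboot.
ansible managed-node-01.example.com -m ansible.builtin.command \
  -a 'sysctl fs.file-max kernel.threads-max'
ansible managed-node-01.example.com -m ansible.builtin.command \
  -a 'cat /sys/class/net/lo/mtu /sys/kernel/mm/transparent_hugepage/enabled'
```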
Hardware Guide | Hardware Guide Red Hat Ceph Storage 7 Hardware selection recommendations for Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html-single/hardware_guide/minimum-hardware-recommendations-for-containerized-ceph_hw |
function::uint_arg | function::uint_arg Name function::uint_arg - Return function argument as unsigned int Synopsis Arguments n index of argument to return Description Return the value of argument n as an unsigned int (i.e., a 32-bit integer zero-extended to 64 bits). | [
"uint_arg:long(n:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-uint-arg |
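For the uint_arg entry above, the function is normally called from inside a probe handler to read a numbered argument of the probed function when debuginfo-typed access is not used. The one-liner below is only an illustrative sketch: it assumes SystemTap is installed, that the running kernel still exports a do_sys_open function, and that argument 3 of that function is the open flags value; pick a function and argument index that exist on your own kernel.

```bash
# Hypothetical dwarfless probe: print argument 3 of do_sys_open as an unsigned value.
# asmlinkage() declares the calling convention before the *_arg helpers are used.
stap -e 'probe kprobe.function("do_sys_open") {
    asmlinkage()
    printf("%s: arg3 as uint = 0x%x\n", execname(), uint_arg(3))
}'
```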
Chapter 2. Image [image.openshift.io/v1] | Chapter 2. Image [image.openshift.io/v1] Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. 
Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 2.1.1. .dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 2.1.2. .dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 2.1.3. .dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 2.1.4. .dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field repreenting a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 2.1.5. .signatures Description Signatures holds all signatures of the image. Type array 2.1.6. .signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. 
content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 2.1.7. .signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 2.1.8. .signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 2.1.9. .signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 2.1.10. .signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 2.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/images DELETE : delete collection of Image GET : list or watch objects of kind Image POST : create an Image /apis/image.openshift.io/v1/watch/images GET : watch individual changes to a list of Image. deprecated: use the 'watch' parameter with a list operation instead. /apis/image.openshift.io/v1/images/{name} DELETE : delete an Image GET : read the specified Image PATCH : partially update the specified Image PUT : replace the specified Image /apis/image.openshift.io/v1/watch/images/{name} GET : watch changes to an object of kind Image. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/image.openshift.io/v1/images HTTP method DELETE Description delete collection of Image Table 2.1. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. HTTP responses HTTP code Reponse body 200 - OK Status_v5 schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Image Table 2.3. HTTP responses HTTP code Reponse body 200 - OK ImageList schema 401 - Unauthorized Empty HTTP method POST Description create an Image Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body Image schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 202 - Accepted Image schema 401 - Unauthorized Empty 2.2.2. /apis/image.openshift.io/v1/watch/images HTTP method GET Description watch individual changes to a list of Image. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/image.openshift.io/v1/images/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the Image HTTP method DELETE Description delete an Image Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Reponse body 200 - OK Status_v5 schema 202 - Accepted Status_v5 schema 401 - Unauthorized Empty HTTP method GET Description read the specified Image Table 2.11. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Image Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Image Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body Image schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty 2.2.4. /apis/image.openshift.io/v1/watch/images/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the Image HTTP method GET Description watch changes to an object of kind Image. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/image_apis/image-image-openshift-io-v1 |
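The Image API row above lists the raw REST endpoints; in day-to-day use the same read operations are usually driven through the oc client. The commands below are an illustrative sketch, assuming you are logged in to a cluster with permission to read images; they only exercise the list and get operations from section 2.2 and change nothing.

```bash
# List Image objects (the images resource described in this chapter).
oc get images.image.openshift.io

# Call the raw list endpoint from section 2.2.1 directly.
oc get --raw /apis/image.openshift.io/v1/images | head -c 2000

# Read a single Image by name, per section 2.2.3.
# <image_name> is a placeholder for a sha256-based name taken from the list above.
oc get image <image_name> -o yaml
```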
Chapter 5. Deploy standalone Multicloud Object Gateway | Chapter 5. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. You can deploy the Multicloud Object Gateway component either using dynamic storage devices or using the local storage devices. 5.1. Deploy standalone Multicloud Object Gateway using dynamic storage devices Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway After deploying the component, you can create and manage buckets using MCG object browser. For more information, see Creating and managing buckets using MCG object browser . 5.1.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 5.1.2. 
Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . 
In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) 5.2. Deploy standalone Multicloud Object Gateway using local storage devices Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway After deploying the MCG component, you can create and manage buckets using MCG object browser. For more information, see Creating and managing buckets using MCG object browser . 5.2.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 5.2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. 
For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 5.2.3. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Create a new StorageClass using the local storage devices option. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Filesystem is selected by default. Always ensure that the Filesystem is selected for Volume Mode . Device Type Select one or more device types from the dropdown list. 
Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. 
In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"oc annotate namespace openshift-storage openshift.io/node-selector="
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_on_vmware_vsphere/deploy-standalone-multicloud-object-gateway |
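The verification steps in the Multicloud Object Gateway row above are console-based; the same checks can be approximated from the command line. The sketch below is not part of the documented procedure: it only assumes the openshift-storage namespace used throughout the chapter, and the pod name prefixes come from the verification table.

```bash
# Confirm that the OpenShift Data Foundation operator CSV reached the Succeeded phase.
oc get csv -n openshift-storage

# Confirm that the standalone MCG pods from the verification table are Running.
oc get pods -n openshift-storage | \
  grep -E 'noobaa|ocs-operator|ocs-metrics-exporter|odf-|rook-ceph-operator|csi-addons'
```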
20.3. Adding a Network Device | 20.3. Adding a Network Device Network device driver modules are loaded automatically by udev . You can add a network interface on IBM Z dynamically or persistently. Dynamically Load the device driver Remove the network devices from the list of ignored devices. Create the group device. Configure the device. Set the device online. Persistently Create a configuration script. Activate the interface. The following sections provide basic information for each task of each IBM Z network device driver. Section 20.3.1, "Adding a qeth Device" describes how to add a qeth device to an existing instance of Red Hat Enterprise Linux. Section 20.3.2, "Adding an LCS Device" describes how to add an lcs device to an existing instance of Red Hat Enterprise Linux. 20.3.1. Adding a qeth Device The qeth network device driver supports IBM Z OSA-Express features in QDIO mode, HiperSockets, z/VM guest LAN, and z/VM VSWITCH. The qeth device driver assigns the same interface name for Ethernet and Hipersockets devices: enccw bus_ID . The bus ID is composed of the channel subsystem ID, subchannel set ID, and device number, for example enccw0.0.0a00 . 20.3.1.1. Dynamically Adding a qeth Device To add a qeth device dynamically, follow these steps: Determine whether the qeth device driver modules are loaded. The following example shows loaded qeth modules: If the output of the lsmod command shows that the qeth modules are not loaded, run the modprobe command to load them: Use the cio_ignore utility to remove the network channels from the list of ignored devices and make them visible to Linux: Replace read_device_bus_id , write_device_bus_id , data_device_bus_id with the three device bus IDs representing a network device. For example, if the read_device_bus_id is 0.0.f500 , the write_device_bus_id is 0.0.f501 , and the data_device_bus_id is 0.0.f502 : Use the znetconf utility to sense and list candidate configurations for network devices: Select the configuration you want to work with and use znetconf to apply the configuration and to bring the configured group device online as network device. Optionally, you can also pass arguments that are configured on the group device before it is set online: Now you can continue to configure the enccw0.0.f500 network interface. Alternatively, you can use sysfs attributes to set the device online as follows: Create a qeth group device: For example: , verify that the qeth group device was created properly by looking for the read channel: You can optionally set additional parameters and features, depending on the way you are setting up your system and the features you require, such as: portno layer2 portname Bring the device online by writing 1 to the online sysfs attribute: Then verify the state of the device: A return value of 1 indicates that the device is online, while a return value 0 indicates that the device is offline. Find the interface name that was assigned to the device: Now you can continue to configure the enccw0.0.f500 network interface. The following command from the s390utils package shows the most important settings of your qeth device: 20.3.1.2. Dynamically Removing a qeth Device To remove a qeth device, use the znetconf utility. For example: Use the znetconf utility to show you all configured network devices: Select the network device to be removed and run znetconf to set the device offline and ungroup the ccw > group device. Verify the success of the removal: 20.3.1.3. 
Persistently Adding a qeth Device To make your new qeth device persistent, you need to create the configuration file for your new interface. The network interface configuration files are placed in the /etc/sysconfig/network-scripts/ directory. The network configuration files use the naming convention ifcfg- device , where device is the value found in the if_name file in the qeth group device that was created earlier, for example enccw0.0.09a0 . The cio_ignore commands are handled transparently for persistent device configurations and you do not need to free devices from the ignore list manually. If a configuration file for another device of the same type already exists, the simplest way is to copy it to the new name and then edit it: To learn IDs of your network devices, use the lsqeth utility: If you do not have a similar device defined, you must create a new file. Use this example of /etc/sysconfig/network-scripts/ifcfg-0.0.09a0 as a template: Edit the new ifcfg-0.0.0600 file as follows: Modify the DEVICE statement to reflect the contents of the if_name file from your ccw group. Modify the IPADDR statement to reflect the IP address of your new interface. Modify the NETMASK statement as needed. If the new interface is to be activated at boot time, then make sure ONBOOT is set to yes . Make sure the SUBCHANNELS statement matches the hardware addresses for your qeth device. Please, note that the IDs must be specified in lowercase. Modify the PORTNAME statement or leave it out if it is not necessary in your environment. You can add any valid sysfs attribute and its value to the OPTIONS parameter. The Red Hat Enterprise Linux installation program currently uses this to configure the layer mode ( layer2 ) and the relative port number ( portno ) of qeth devices. The qeth device driver default for OSA devices is now layer 2 mode. To continue using old ifcfg definitions that rely on the default of layer 3 mode, add layer2=0 to the OPTIONS parameter. /etc/sysconfig/network-scripts/ifcfg-0.0.0600 Changes to an ifcfg file only become effective after rebooting the system or after the dynamic addition of new network device channels by changing the system's I/O configuration (for example, attaching under z/VM). Alternatively, you can trigger the activation of a ifcfg file for network channels which were previously not active yet, by executing the following commands: Use the cio_ignore utility to remove the network channels from the list of ignored devices and make them visible to Linux: Replace read_device_bus_id , write_device_bus_id , data_device_bus_id with the three device bus IDs representing a network device. For example, if the read_device_bus_id is 0.0.0600 , the write_device_bus_id is 0.0.0601 , and the data_device_bus_id is 0.0.0602 : To trigger the uevent that activates the change, issue: For example: Check the status of the network device: Now start the new interface: Check the status of the interface: Check the routing for the new interface: Verify your changes by using the ping utility to ping the gateway or another host on the subnet of the new device: If the default route information has changed, you must also update /etc/sysconfig/network accordingly. 20.3.2. Adding an LCS Device The LAN channel station (LCS) device driver supports 1000Base-T Ethernet on the OSA-Express2 and OSA-Express 3 features. The LCS device driver assigns the following interface name for OSA-Express Fast Ethernet and Gigabit Ethernet devices: enccw bus_ID . 
The bus ID is composed of the channel subsystem ID, subchannel set ID, and device number, for example enccw0.0.0a00 . 20.3.2.1. Dynamically Adding an LCS Device Load the device driver: Use the cio_ignore utility to remove the network channels from the list of ignored devices and make them visible to Linux: Replace read_device_bus_id and write_device_bus_id with the two device bus IDs representing a network device. For example: Create the group device: Configure the device. OSA cards can provide up to 16 ports for a single CHPID. By default, the LCS group device uses port 0 . To use a different port, issue a command similar to the following: Replace portno with the port number you want to use. Set the device online: To find out what network device name has been assigned, enter the command: 20.3.2.2. Persistently Adding an LCS Device The cio_ignore commands are handled transparently for persistent device configurations and you do not need to free devices from the ignore list manually. To add an LCS device persistently, follow these steps: Create a configuration script as file in /etc/sysconfig/network-scripts/ with a name like ifcfg- device , where device is the value found in the if_name file in the qeth group device that was created earlier, for example enccw0.0.09a0 . The file should look similar to the following: Modify the value of PORTNAME to reflect the LCS port number ( portno ) you would like to use. You can add any valid lcs sysfs attribute and its value to the optional OPTIONS parameter. See Section 20.3.1.3, "Persistently Adding a qeth Device" for the syntax. Set the DEVICE parameter as follows: Issue an ifup command to activate the device: Changes to an ifcfg file only become effective after rebooting the system. You can trigger the activation of a ifcfg file for network channels by executing the following commands: Use the cio_ignore utility to remove the LCS device adapter from the list of ignored devices and make it visible to Linux: Replace read_device_bus_id and write_device_bus_id with the device bus IDs of the LCS device. For example: To trigger the uevent that activates the change, issue: For example: 20.3.3. Configuring a IBM Z Network Device for Network Root File System To add a network device that is required to access the root file system, you only have to change the boot options. The boot options can be in a parameter file (see Chapter 21, Parameter and Configuration Files on IBM Z ) or part of a zipl.conf on a DASD or FCP-attached SCSI LUN prepared with the zipl boot loader. There is no need to recreate the initramfs. Dracut , the mkinitrd successor that provides the functionality in the initramfs that in turn replaces initrd , provides a boot parameter to activate network devices on IBM Z early in the boot process: rd.znet= . As input, this parameter takes a comma-separated list of the NETTYPE (qeth, lcs, ctc), two (lcs, ctc) or three (qeth) device bus IDs, and optional additional parameters consisting of key-value pairs corresponding to network device sysfs attributes. This parameter configures and activates the IBM Z network hardware. The configuration of IP addresses and other network specifics works the same as for other platforms. See the dracut documentation for more details. The cio_ignore commands for the network channels are handled transparently on boot. Example boot options for a root file system accessed over the network through NFS: | [
"lsmod | grep qeth qeth_l3 127056 9 qeth_l2 73008 3 ipv6 492872 155ip6t_REJECT,nf_conntrack_ipv6,qeth_l3 qeth 115808 2 qeth_l3,qeth_l2 qdio 68240 1 qeth ccwgroup 12112 2 qeth",
"modprobe qeth",
"cio_ignore -r read_device_bus_id , write_device_bus_id , data_device_bus_id",
"cio_ignore -r 0.0.f500,0.0.f501,0.0.f502",
"znetconf -u Scanning for network devices Device IDs Type Card Type CHPID Drv. ------------------------------------------------------------ 0.0.f500,0.0.f501,0.0.f502 1731/01 OSA (QDIO) 00 qeth 0.0.f503,0.0.f504,0.0.f505 1731/01 OSA (QDIO) 01 qeth 0.0.0400,0.0.0401,0.0.0402 1731/05 HiperSockets 02 qeth",
"znetconf -a f500 Scanning for network devices Successfully configured device 0.0.f500 (enccw0.0.f500)",
"znetconf -a f500 -o portname=myname Scanning for network devices Successfully configured device 0.0.f500 (enccw0.0.f500)",
"echo read_device_bus_id , write_device_bus_id , data_device_bus_id > /sys/bus/ccwgroup/drivers/qeth/group",
"echo 0.0.f500,0.0.f501,0.0.f502 > /sys/bus/ccwgroup/drivers/qeth/group",
"ls /sys/bus/ccwgroup/drivers/qeth/0.0.f500",
"echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online",
"cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online 1",
"cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/if_name enccw0.0.f500",
"lsqeth enccw0.0.f500 Device name : enccw0.0.f500 ------------------------------------------------- card_type : OSD_1000 cdev0 : 0.0.f500 cdev1 : 0.0.f501 cdev2 : 0.0.f502 chpid : 76 online : 1 portname : OSAPORT portno : 0 state : UP (LAN ONLINE) priority_queueing : always queue 0 buffer_count : 16 layer2 : 1 isolation : none",
"znetconf -c Device IDs Type Card Type CHPID Drv. Name State -------------------------------------------------------------------------------------- 0.0.8036,0.0.8037,0.0.8038 1731/05 HiperSockets FB qeth hsi1 online 0.0.f5f0,0.0.f5f1,0.0.f5f2 1731/01 OSD_1000 76 qeth enccw0.0.09a0 online 0.0.f500,0.0.f501,0.0.f502 1731/01 GuestLAN QDIO 00 qeth enccw0.0.f500 online",
"znetconf -r f500 Remove network device 0.0.f500 (0.0.f500,0.0.f501,0.0.f502)? Warning: this may affect network connectivity! Do you want to continue (y/n)?y Successfully removed device 0.0.f500 (enccw0.0.f500)",
"znetconf -c Device IDs Type Card Type CHPID Drv. Name State -------------------------------------------------------------------------------------- 0.0.8036,0.0.8037,0.0.8038 1731/05 HiperSockets FB qeth hsi1 online 0.0.f5f0,0.0.f5f1,0.0.f5f2 1731/01 OSD_1000 76 qeth enccw0.0.09a0 online",
"cd /etc/sysconfig/network-scripts # cp ifcfg-enccw0.0.09a0 ifcfg-enccw0.0.0600",
"lsqeth -p devices CHPID interface cardtype port chksum prio-q'ing rtr4 rtr6 lay'2 cnt -------------------------- ----- ---------------- -------------- ---- ------ ---------- ---- ---- ----- ----- 0.0.09a0/0.0.09a1/0.0.09a2 x00 enccw0.0.09a0 Virt.NIC QDIO 0 sw always_q_2 n/a n/a 1 64 0.0.0600/0.0.0601/0.0.0602 x00 enccw0.0.0600 Virt.NIC QDIO 0 sw always_q_2 n/a n/a 1 64",
"IBM QETH DEVICE=enccw0.0.09a0 BOOTPROTO=static IPADDR=10.12.20.136 NETMASK=255.255.255.0 ONBOOT=yes NETTYPE=qeth SUBCHANNELS=0.0.09a0,0.0.09a1,0.0.09a2 PORTNAME=OSAPORT OPTIONS='layer2=1 portno=0' MACADDR=02:00:00:23:65:1a TYPE=Ethernet",
"IBM QETH DEVICE=enccw0.0.0600 BOOTPROTO=static IPADDR=192.168.70.87 NETMASK=255.255.255.0 ONBOOT=yes NETTYPE=qeth SUBCHANNELS=0.0.0600,0.0.0601,0.0.0602 PORTNAME=OSAPORT OPTIONS='layer2=1 portno=0' MACADDR=02:00:00:b3:84:ef TYPE=Ethernet",
"cio_ignore -r read_device_bus_id , write_device_bus_id , data_device_bus_id",
"cio_ignore -r 0.0.0600,0.0.0601,0.0.0602",
"echo add > /sys/bus/ccw/devices/ read-channel /uevent",
"echo add > /sys/bus/ccw/devices/0.0.0600/uevent",
"lsqeth",
"ifup enccw0.0.0600",
"ip addr show enccw0.0.0600 3: enccw0.0.0600: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 3c:97:0e:51:38:17 brd ff:ff:ff:ff:ff:ff inet 10.85.1.245/24 brd 10.34.3.255 scope global dynamic enccw0.0.0600 valid_lft 81487sec preferred_lft 81487sec inet6 1574:12:5:1185:3e97:eff:fe51:3817/64 scope global noprefixroute dynamic valid_lft 2591994sec preferred_lft 604794sec inet6 fe45::a455:eff:d078:3847/64 scope link valid_lft forever preferred_lft forever",
"ip route default via 10.85.1.245 dev enccw0.0.0600 proto static metric 1024 12.34.4.95/24 dev enp0s25 proto kernel scope link src 12.34.4.201 12.38.4.128 via 12.38.19.254 dev enp0s25 proto dhcp metric 1 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1",
"ping -c 1 192.168.70.8 PING 192.168.70.8 (192.168.70.8) 56(84) bytes of data. 64 bytes from 192.168.70.8: icmp_seq=0 ttl=63 time=8.07 ms",
"modprobe lcs",
"cio_ignore -r read_device_bus_id , write_device_bus_id",
"cio_ignore -r 0.0.09a0,0.0.09a1",
"echo read_device_bus_id , write_device_bus_id > /sys/bus/ccwgroup/drivers/lcs/group",
"echo portno > /sys/bus/ccwgroup/drivers/lcs/device_bus_id/portno",
"echo 1 > /sys/bus/ccwgroup/drivers/lcs/read_device_bus_id/online",
"ls -l /sys/bus/ccwgroup/drivers/lcs/ read_device_bus_ID /net/ drwxr-xr-x 4 root root 0 2010-04-22 16:54 enccw0.0.0600",
"IBM LCS DEVICE=enccw0.0.09a0 BOOTPROTO=static IPADDR=10.12.20.136 NETMASK=255.255.255.0 ONBOOT=yes NETTYPE=lcs SUBCHANNELS=0.0.09a0,0.0.09a1 PORTNAME=0 OPTIONS='' TYPE=Ethernet",
"DEVICE=enccw bus_ID",
"ifup enccw bus_ID",
"cio_ignore -r read_device_bus_id , write_device_bus_id",
"cio_ignore -r 0.0.09a0,0.0.09a1",
"echo add > /sys/bus/ccw/devices/ read-channel /uevent",
"echo add > /sys/bus/ccw/devices/0.0.09a0/uevent",
"root=10.16.105.196:/nfs/nfs_root cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0,portname=OSAPORT ip=10.16.105.197:10.16.105.196:10.16.111.254:255.255.248.0:nfs‐server.subdomain.domain:enccw0.0.09a0:none rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-post-installation-adding-network-devices-s390 |
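The command listing above moves through the same sequence for each new device: clone an existing ifcfg file, remove the subchannels from the ignore list, trigger a uevent so the qeth group device is created, and then activate and verify the interface. The following is a minimal shell sketch that consolidates those steps for a hypothetical qeth device on subchannels 0.0.0600, 0.0.0601, and 0.0.0602; the device numbers, the interface name derived from them, and the ifcfg file being cloned are illustrative assumptions that must be adjusted to your own configuration.

# Minimal sketch: bring a new qeth device online persistently (hypothetical subchannels).
READ=0.0.0600; WRITE=0.0.0601; DATA=0.0.0602
IFACE=enccw${READ}

# 1. Create the ifcfg file by cloning an existing one; edit DEVICE, SUBCHANNELS,
#    IPADDR, NETMASK, and MACADDR in the copy before continuing.
cd /etc/sysconfig/network-scripts
cp ifcfg-enccw0.0.09a0 ifcfg-${IFACE}

# 2. Make the subchannels visible to Linux.
cio_ignore -r ${READ},${WRITE},${DATA}

# 3. Trigger the uevent so the qeth group device is created from the ifcfg file.
echo add > /sys/bus/ccw/devices/${READ}/uevent

# 4. Verify the device and activate the interface.
lsqeth
ifup ${IFACE}
ip addr show ${IFACE}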
Chapter 14. Exporting and importing the Developer Portal | Chapter 14. Exporting and importing the Developer Portal As a 3scale API provider, you can export and import the Developer Portal for the following purposes: Creating backups. Keeping the Developer Portal in an external repository, for example, GitHub. Integrating the Developer Portal with other applications. Use the Developer Portal API as a Content Management System (CMS) to import and export the Developer Portal content. For this, perform the following steps to generate an access token with sufficient permissions to use the Developer Portal API. Procedure Navigate to Account settings > Personal > Tokens and click Add Access Token . Name the access token and check Developer Portal API . Choose permissions: Read only allows retrieving Developer Portal contents. Read and Write allows retrieving and restoring Developer Portal contents. Click Create Access token . Copy and store the token information displayed. Check the endpoints list by navigating from the left panel to Integrate > 3scale API Docs . Then, scroll down to Developer Portal API . Use the token generated to call each endpoint and fill in the fields according to your needs. Developer Portal API endpoints considerations The Developer Portal API available from 3scale 2.14 is not compatible with earlier versions. Also, from 3scale 2.14, JSON is the only supported data format for all requests and responses. For each endpoint, you can perform the following actions: GET to read and list resources. POST to create and add resources. PUT to modify resources. DELETE to delete resources. Note Built-in objects cannot be deleted. Call the GET /admin/api/cms/templates endpoint with the type=builtin_page parameter to get a list of builtin pages and the type=builtin_partial parameter to get a list of builtin partials. To create a complete backup, you must retrieve each piece of content individually. There is no API endpoint that downloads a complete archive containing all files. If the content parameter is not sent, the endpoint does not return published or draft content. Instead, it returns a summary with information like template name and section because the content is too long. Use the details listed under each endpoint to refine the output after executing it. Consider the following for the listed parameters: All endpoints reject unsupported parameters; if unsupported parameters are sent, the request is canceled. GET /admin/api/cms/templates endpoint accepts a content parameter. By default, it returns a list of Developer Portal templates. To also get published and draft content, use the content=true parameter. GET /admin/api/cms/templates endpoint accepts type and section_id parameters to filter results. GET /admin/api/cms/sections endpoint accepts parent_id parameter to filter results. GET /admin/api/cms/files endpoint accepts section_id parameter to filter results. | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/creating_the_developer_portal/exporting-and-importing-the-developer-portal
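As a usage sketch for the endpoints listed above, the call below backs up the template list together with its draft and published content. The tenant admin host name and the access_token query parameter are assumptions for illustration; substitute the admin portal host for your own account and the token you created under Account settings > Personal > Tokens.

# Hypothetical example: export Developer Portal templates, including draft and published content.
TOKEN="<your_access_token>"
ADMIN_HOST="https://example-admin.3scale.net"   # assumed tenant admin portal host

curl -s "${ADMIN_HOST}/admin/api/cms/templates?access_token=${TOKEN}&content=true" \
  -o templates-backup.json

The same pattern applies to the sections and files endpoints; a complete backup saves the JSON response of each one.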
Chapter 8. DNS [config.openshift.io/v1] | Chapter 8. DNS [config.openshift.io/v1] Description DNS holds cluster-wide information about DNS. The canonical name is cluster Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 8.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description baseDomain string baseDomain is the base domain of the cluster. All managed DNS records will be sub-domains of this base. For example, given the base domain openshift.example.com , an API server DNS record may be created for cluster-api.openshift.example.com . Once set, this field cannot be changed. platform object platform holds configuration specific to the underlying infrastructure provider for DNS. When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. privateZone object privateZone is the location where all the DNS records that are only available internally to the cluster exist. If this field is nil, no private records should be created. Once set, this field cannot be changed. publicZone object publicZone is the location where all the DNS records that are publicly accessible to the internet exist. If this field is nil, no public records should be created. Once set, this field cannot be changed. 8.1.2. .spec.platform Description platform holds configuration specific to the underlying infrastructure provider for DNS. When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. Type object Required type Property Type Description aws object aws contains DNS configuration specific to the Amazon Web Services cloud provider. type string type is the underlying infrastructure provider for the cluster. Allowed values: "", "AWS". Individual components may not support all platforms, and must handle unrecognized platforms with best-effort defaults. 8.1.3. .spec.platform.aws Description aws contains DNS configuration specific to the Amazon Web Services cloud provider. Type object Property Type Description privateZoneIAMRole string privateZoneIAMRole contains the ARN of an IAM role that should be assumed when performing operations on the cluster's private hosted zone specified in the cluster DNS config. When left empty, no role should be assumed. 8.1.4. 
.spec.privateZone Description privateZone is the location where all the DNS records that are only available internally to the cluster exist. If this field is nil, no private records should be created. Once set, this field cannot be changed. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 8.1.5. .spec.publicZone Description publicZone is the location where all the DNS records that are publicly accessible to the internet exist. If this field is nil, no public records should be created. Once set, this field cannot be changed. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 8.1.6. .status Description status holds observed values from the cluster. They may not be overridden. Type object 8.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/dnses DELETE : delete collection of DNS GET : list objects of kind DNS POST : create a DNS /apis/config.openshift.io/v1/dnses/{name} DELETE : delete a DNS GET : read the specified DNS PATCH : partially update the specified DNS PUT : replace the specified DNS /apis/config.openshift.io/v1/dnses/{name}/status GET : read status of the specified DNS PATCH : partially update status of the specified DNS PUT : replace status of the specified DNS 8.2.1. /apis/config.openshift.io/v1/dnses HTTP method DELETE Description delete collection of DNS Table 8.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind DNS Table 8.2. HTTP responses HTTP code Response body 200 - OK DNSList schema 401 - Unauthorized Empty HTTP method POST Description create a DNS Table 8.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.4. Body parameters Parameter Type Description body DNS schema Table 8.5. HTTP responses HTTP code Response body 200 - OK DNS schema 201 - Created DNS schema 202 - Accepted DNS schema 401 - Unauthorized Empty 8.2.2. /apis/config.openshift.io/v1/dnses/{name} Table 8.6. Global path parameters Parameter Type Description name string name of the DNS HTTP method DELETE Description delete a DNS Table 8.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DNS Table 8.9. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DNS Table 8.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.11. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DNS Table 8.12.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body DNS schema Table 8.14. HTTP responses HTTP code Response body 200 - OK DNS schema 201 - Created DNS schema 401 - Unauthorized Empty 8.2.3. /apis/config.openshift.io/v1/dnses/{name}/status Table 8.15. Global path parameters Parameter Type Description name string name of the DNS HTTP method GET Description read status of the specified DNS Table 8.16. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DNS Table 8.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.18. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DNS Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.20. Body parameters Parameter Type Description body DNS schema Table 8.21. HTTP responses HTTP code Response body 200 - OK DNS schema 201 - Created DNS schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/config_apis/dns-config-openshift-io-v1
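In practice, these endpoints are usually reached through the oc client rather than called directly. The commands below are a small sketch of reading the cluster-wide DNS configuration through the documented GET endpoints; they assume you are logged in to the cluster with sufficient privileges, and jq is used only to trim the output.

# Read the cluster-wide DNS configuration (the canonical object is named "cluster").
oc get dns.config.openshift.io cluster -o yaml

# Equivalent call against the raw endpoint documented above.
oc get --raw /apis/config.openshift.io/v1/dnses/cluster | jq .spec

# Read the status subresource.
oc get --raw /apis/config.openshift.io/v1/dnses/cluster/status | jq .status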
Chapter 2. Onboarding certification partners | Chapter 2. Onboarding certification partners Use the Red Hat Customer Portal to create a new account if you are a new partner, or use your existing Red Hat account if you are a current partner to onboard with Red Hat for certifying your products. 2.1. Onboarding existing certification partners As an existing partner, you could be: A member of the one-to-many EPM program who has some degree of representation on the EPM team, but does not have any assistance with OpenStack certification. OR A member fully managed by the EPM team in the traditional manner with a dedicated EPM team member who is assigned to manage the partner, including questions about OpenStack certification requests. Prerequisites You have an existing Red Hat account. Procedure Access Red Hat Customer Portal and click Log in . Enter your Red Hat login or email address and proceed. Then, use either of the following options: Log in with company single sign-on Log in with Red Hat account From the menu bar on the header, click your avatar to view the account details. If an account number is associated with your account, then contact the certification team to proceed with the certification process. If an account number is not associated with your account, then first contact the Red Hat global customer service team to raise a request for creating a new account number. After you get an account number, contact the certification team to proceed with the certification process. 2.2. Onboarding new certification partners Creating a new Red Hat account is the first step in onboarding new certification partners. Access Red Hat Customer Portal and click Register . Enter the following details to create a new Red Hat account: Select Corporate in the Account Type field. If you have created a Corporate type account and require an account number, contact the Red Hat global customer service team . Note Ensure that you create a company account and not a personal account. The account created during this step is also used to sign in to the Red Hat Ecosystem Catalog when working with certification requests. Choose a Red Hat login and password. Important If your login ID is associated with multiple accounts, then do not use your contact email as the login ID as this can cause issues during login. Also, you cannot change your login ID once created. Enter your Personal information and Company information . Click Create My Account . A new Red Hat account is created. Contact your Ecosystem Partner Management (EPM) representative, if available. Otherwise, contact the certification team to proceed with the certification process. | null | https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_bare_metal_hardware_certification_workflow_guide/onboarding-certification-partners_rhosp-bm-wf-introduction
Chapter 4. Reference | Chapter 4. Reference 4.1. aggregate-realm attributes You can configure aggregate-realm by setting its attributes. Table 4.1. aggregate-realm attributes Attribute Description authentication-realm Reference to the security realm to use for authentication steps. This is used for obtaining or validating credentials. authorization-realm Reference to the security realm to use for loading the identity for authorization steps. authorization-realms Reference to the security realms to aggregate for loading the identity for authorization steps. If an attribute is defined in more than one authorization realm, the value of the first occurrence of the attribute is used. principal-transformer Reference to a principal transformer to apply between loading the identity for authentication and loading the identity for authorization. Note The authorization-realm and authorization-realms attributes are mutually exclusive. Define only one of the two attributes in a realm. 4.2. caching-realm attributes You can configure caching-realm by setting its attributes. Table 4.2. caching-realm Attributes Attribute Description maximum-age The time in milliseconds that an item can stay in the cache. A value of -1 keeps items indefinitely. This defaults to -1 . maximum-entries The maximum number of entries to keep in the cache. This defaults to 16 . realm A reference to a cacheable security realm such as jdbc-realm , ldap-realm , filesystem-realm or a custom security realm. 4.3. distributed-realm attributes You can configure distributed-realm by setting its attributes. Table 4.3. distributed-realm attributes Attribute Description emit-events Whether a SecurityEvent signifying realm unavailability should be emitted. Applicable only when the ignore-unavailable-realms attribute is set to true. The default value is true . ignore-unavailable-realms In case the connection to any identity store fails, whether subsequent realms should be checked. Set the value to true to check the subsequent realms. The default value is false . When the value is set to true , a SecurityEvent is emitted if the connection to any identity store fails, by default. realms A list of the security realms to search. The security realms are invoked sequentially in the order they are provided in this attribute. 4.4. failover-realm attributes You can configure failover-realm by setting its attributes. Table 4.4. failover-realm attributes Attribute Description delegate-realm The security realm to use by default. emit-events Specifies whether a security event of the type SecurityEvent that signifies the unavailability of a delegate-realm should be emitted. When enabled, you can capture these events in the audit log. The default value is true . failover-realm The security realm to use in case the delegate-realm is unavailable. 4.5. file-audit-log attributes Table 4.5. file-audit-log attributes Attribute Description autoflush Specifies if the output stream requires flushing after every audit event. If you do not define the attribute, the synchronized attribute value is the default. encoding Specifies the audit file encoding. The default is UTF-8 . The possible values are the following: UTF-8 UTF-16BE UTF-16LE UTF-16 US-ASCII ISO-8859-1 format Default value is SIMPLE . Use SIMPLE for human readable text format or JSON for storing individual events in JSON . path Defines the location of the log files. relative-to Optional attribute. Defines the location of the log files. synchronized Default value is true .
Specifies that the file descriptor gets synchronized after every audit event. 4.6. http-authentication-factory attributes You can configure http-authentication-factory by setting its attributes. Table 4.6. http-authentication-factory attributes Attribute Description http-server-mechanism-factory The HttpServerAuthenticationMechanismFactory to associate with this resource. mechanism-configurations The list of mechanism-specific configurations. security-domain The security domain to associate with the resource. Table 4.7. http-authentication-factory mechanism-configurations attributes Attribute Description credential-security-factory The security factory to use to obtain a credential as required by the mechanism. final-principal-transformer A final principal transformer to apply for this mechanism realm. host-name The host name this configuration applies to. mechanism-name This configuration will only apply where a mechanism with the name specified is used. If this attribute is omitted then this will match any mechanism name. mechanism-realm-configurations The list of definitions of the realm names as understood by the mechanism. pre-realm-principal-transformer A principal transformer to apply before the realm is selected. post-realm-principal-transformer A principal transformer to apply after the realm is selected. protocol The protocol this configuration applies to. realm-mapper The realm mapper to be used by the mechanism. Table 4.8. http-authentication-factory mechanism-configurations mechanism-realm-configurations attributes Attribute Description final-principal-transformer A final principal transformer to apply for this mechanism realm. post-realm-principal-transformer A principal transformer to apply after the realm is selected. pre-realm-principal-transformer A principal transformer to apply before the realm is selected. realm-mapper The realm mapper to be used by the mechanism. realm-name The name of the realm to be presented by the mechanism. 4.7. jaas-realm attributes You can configure jaas-realm by setting its attributes. All the attributes except entry are optional. Table 4.9. jaas-realm attributes attribute description callback-handler Callback handler to use with the Login Context. Security property auth.login.defaultCallbackHandler can be used instead. The default callback handler of the realm is used if none of these are defined. entry The entry name to use to initialize LoginContext . module The module with custom LoginModules and CallbackHandler classes. path The optional path to JAAS configuration file. You can also specify the location with java system property java.security.auth.login.config or with java security property login.config.url . relative-to If you provide relative-to , the value of the path attribute is treated as relative to the path specified by this attribute. 4.8. module command arguments You can use different arguments with the module command. Table 4.10. module command arguments Argument Description --absolute-resources Use this argument to specify a list of absolute file system paths to reference from its module.xml file. The files specified are not copied to the module directory. See --resource-delimiter for delimiter details. --allow-nonexistent-resources Use this argument to create empty directories for resources specified by --resources that do not exist. The module add command will fail if there are resources that do not exist and this argument is not used. 
--dependencies Use this argument to provide a comma-separated list of module names that this module depends on. --export-dependencies Use this argument to specify exported dependencies. --main-class Use this argument to specify the fully qualified class name that declares the module's main method. --module-root-dir Use this argument if you have defined an external JBoss EAP module directory to use instead of the default EAP_HOME /modules/ directory. --module-xml Use this argument to provide a file system path to a module.xml to use for this new module. This file is copied to the module directory. If this argument is not specified, a module.xml file is generated in the module directory. --name Use this argument to provide the name of the module to add. This argument is required. --properties Use this argument to provide a comma-separated list of PROPERTY_NAME = PROPERTY_VALUE pairs that define module properties. --resource-delimiter Use this argument to set a user-defined file path separator for the list of resources provided to the --resources or absolute-resources argument. If not set, the file path separator is a colon ( : ) for Linux and a semicolon ( ; ) for Windows. --resources Use this argument to specify the resources for this module by providing a list of file system paths. The files are copied to this module directory and referenced from its module.xml file. If you provide a path to a directory, the directory and its contents are copied to the module directory. Symbolic links are not preserved; linked resources are copied to the module directory. This argument is required unless --absolute-resources or --module-xml is provided. See --resource-delimiter for delimiter details. --slot Use this argument to add the module to a slot other than the default main slot. 4.9. periodic-rotating-file-audit-log attributes Table 4.11. periodic-rotating-file-audit-log attributes Attribute Description autoflush Specifies if the output stream requires flushing after every audit event. If you do not define the attribute, the synchronized attribute value is the default. encoding Specifies the audit file encoding. The default is UTF-8 . The possible values are the following: UTF-8 UTF-16BE UTF-16LE UTF-16 US-ASCII ISO-8859-1 format Use SIMPLE for human readable text format or JSON for storing individual events in JSON . path Defines the location of the log files. relative-to Optional attribute. Defines the location of the log files. suffix Optional attribute. Adds a date suffix to a rotated log. You must use the java.time.format.DateTimeFormatter format. For example .yyyy-MM-dd . synchronized Default value is true . Specifies that the file descriptor gets synchronized after every audit event. 4.10. sasl-authentication-factory attributes You can configure sasl-authentication-factory by setting its attributes. Table 4.12. sasl-authentication-factory attributes Attribute Description mechanism-configurations The list of mechanism specific configurations. sasl-server-factory The SASL server factory to associate with this resource. security-domain The security domain to associate with this resource. Table 4.13. sasl-authentication-factory mechanism-configurations attributes Attribute Description credential-security-factory The security factory to use to obtain a credential as required by the mechanism. final-principal-transformer A final principal transformer to apply for this mechanism realm. host-name The host name this configuration applies to.
mechanism-name This configuration will only apply where a mechanism with the name specified is used. If this attribute is omitted then this will match any mechanism name. mechanism-realm-configurations The list of definitions of the realm names as understood by the mechanism. protocol The protocol this configuration applies to. post-realm-principal-transformer A principal transformer to apply after the realm is selected. pre-realm-principal-transformer A principal transformer to apply before the realm is selected. realm-mapper The realm mapper to be used by the mechanism. Table 4.14. sasl-authentication-factory mechanism-configurations mechanism-realm-configurations attributes Attribute Description final-principal-transformer A final principal transformer to apply for this mechanism realm. post-realm-principal-transformer A principal transformer to apply after the realm is selected. pre-realm-principal-transformer A principal transformer to apply before the realm is selected. realm-mapper The realm mapper to be used by the mechanism. realm-name The name of the realm to be presented by the mechanism. 4.11. security-domain attributes You can configure security-domain by setting its attributes. Attribute Description default-realm The default realm contained by this security domain. evidence-decoder A reference to an EvidenceDecoder to be used by this domain. outflow-anonymous This attribute specifies whether the anonymous identity should be used if outflow to a security domain is not possible, which happens in the following scenarios: The domain to outflow to does not trust this domain. The identity being outflowed to a domain does not exist in that domain Outflowing anonymous identity clears any previously established identity for that domain. outflow-security-domains The list of security domains that the security identity from this domain should automatically outflow to. permission-mapper A reference to a PermissionMapper to be used by this domain. post-realm-principal-transformer A reference to a principal transformer to be applied after the realm has operated on the supplied identity name. pre-realm-principal-transformer A reference to a principal transformer to be applied before the realm is selected. principal-decoder A reference to a PrincipalDecoder to be used by this domain. realm-mapper Reference to the RealmMapper to be used by this domain. realms The list of realms contained by this security domain. role-decoder Reference to the RoleDecoder to be used by this domain. role-mapper Reference to the RoleMapper to be used by this domain. security-event-listener Reference to a listener for security events. trusted-security-domains The list of security domains that are trusted by this security domain. trusted-virtual-security-domains The list of virtual security domains that are trusted by this security domain. 4.12. simple-role-decoder attributes You can configure simple role decoder by setting its attribute. Table 4.15. simple-role-decoder attributes Attribute Description attribute The name of the attribute from the identity to map directly to roles. 4.13. size-rotating-file-audit-log attributes Table 4.16. size-rotating-file-audit-log attributes Attribute Description autoflush Specifies if the output stream requires flushing after every audit event. If you do not define the attribute, the synchronized attribute value is the default. encoding Specifies the audit file encoding. The default is UTF-8 . 
The possible values are the following: UTF-8 UTF-16BE UTF-16LE UTF-16 US-ASCII ISO-8859-1 format Default value is SIMPLE . Use SIMPLE for human readable text format or JSON for storing individual events in JSON . max-backup-index The maximum number of files to back up when rotating. The default value is 1 . path Defines the location of the log files. relative-to Optional attribute. Defines the location of the log files. rotate-on-boot By default, Elytron does not create a new log file when you restart a server. Set this attribute to true to rotate the log on server restart. rotate-size The maximum size that the log file can reach before Elytron rotates the log. The default is 10m for 10 megabytes. You can also define the maximum size of the log with k, g, b, or t units. You can specify units in either uppercase or lowercase characters. suffix Optional attribute. Adds a date suffix to a rotated log. You must use the java.time.format.DateTimeFormatter format. For example .yyyy-MM-dd-HH . synchronized Default value is true . Specifies that the file descriptor gets synchronized after every audit event. 4.14. syslog-audit-log attributes Table 4.17. syslog-audit-log attributes Attribute Description format The format in which audit events are recorded. Supported values: JSON SIMPLE Default value: SIMPLE host-name The host name to be embedded into all events sent to the syslog server. port The listening port on the syslog server. reconnect-attempts The maximum number of times that Elytron will attempt to send successive messages to a syslog server before closing the connection. The value of this attribute is only valid when the transmission protocol used is UDP. Supported values: Any positive integer value. -1 indicates infinite reconnect attempts. Default value: 0 server-address IP address of the syslog server or a name that can be resolved by Java's InetAddress.getByName() method. ssl-context The SSL context to use when connecting to the syslog server. This attribute is only required if transport is set to SSL_TCP . syslog-format The RFC format to be used for describing the audit event. Supported values: RFC3164 RFC5424 Default value: RFC5424 transport The transport layer protocol to use to connect to the syslog server. Supported values: SSL_TCP TCP UDP Default value: TCP
"module add --name=com.mysql --resources= /path/to /{MySQLDriverJarName} --export-dependencies=wildflyee.api,java.se",
"module add --module-root-dir= /path/to /my-external-modules/ --name=com.mysql --resources= /path/to /{MySQLDriverJarName} --dependencies=wildflyee.api,java.se",
"module add --name=com.mysql --slot=8.0 --resources= /path/to /{MySQLDriverJarName} --dependencies=wildflyee.api,java.se"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/securing_applications_and_management_interfaces_using_multiple_identity_stores/reference |
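The attribute tables above map directly onto resources under the elytron subsystem, so they can be exercised from the management CLI in the same way as the module examples shown in the command listing. The sketch below defines a file-audit-log and a caching-realm using a few of the attributes described above; the resource names and the my-ldap-realm reference are hypothetical and must match realms that exist in your configuration.

# Sketch: create Elytron resources with the management CLI (names are hypothetical).
# file-audit-log uses path, relative-to, format, and synchronized (Table 4.5);
# caching-realm wraps an existing realm with maximum-entries and maximum-age (Table 4.2).
$EAP_HOME/bin/jboss-cli.sh --connect <<'EOF'
/subsystem=elytron/file-audit-log=example-audit:add(path=audit.log, relative-to=jboss.server.log.dir, format=JSON, synchronized=true)
/subsystem=elytron/caching-realm=example-cached-realm:add(realm=my-ldap-realm, maximum-entries=256, maximum-age=300000)
EOF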
Chapter 5. Understanding Bindings in WSDL | Chapter 5. Understanding Bindings in WSDL Abstract Bindings map the logical messages used to define a service into a concrete payload format that can be transmitted and received by an endpoint. Overview Bindings provide a bridge between the logical messages used by a service and the concrete data format that an endpoint uses in the physical world. They describe how the logical messages are mapped into a payload format that is used on the wire by an endpoint. It is within the bindings that details such as parameter order, concrete data types, and return values are specified. For example, the parts of a message can be reordered in a binding to reflect the order required by an RPC call. Depending on the binding type, you can also identify which of the message parts, if any, represent the return type of a method. Port types and bindings Port types and bindings are directly related. A port type is an abstract definition of a set of interactions between two logical services. A binding is a concrete definition of how the messages used to implement the logical services will be instantiated in the physical world. Each binding is then associated with a set of network details that finish the definition of one endpoint that exposes the logical service defined by the port type. To ensure that an endpoint defines only a single service, WSDL requires that a binding can only represent a single port type. For example, if you had a contract with two port types, you could not write a single binding that mapped both of them into a concrete data format. You would need two bindings. However, WSDL allows for a port type to be mapped to several bindings. For example, if your contract had a single port type, you could map it into two or more bindings. Each binding could alter how the parts of the message are mapped or they could specify entirely different payload formats for the message. The WSDL elements Bindings are defined in a contract using the WSDL binding element. The binding element has attributes such as name , which specifies a unique name for the binding, and type , which provides a reference to the port type. The value of the name attribute is used to associate the binding with an endpoint as discussed in Chapter 4, Defining Your Logical Interfaces . The actual mappings are defined in the children of the binding element. These elements vary depending on the type of payload format you decide to use. The different payload formats and the elements used to specify their mappings are discussed in the following chapters. Adding to a contract Apache CXF provides command line tools that can generate bindings for predefined service interfaces. The tools will add the proper elements to your contract for you. However, we recommend that you have some knowledge of how the different types of bindings work. You can also add a binding to a contract using any text editor. When hand editing a contract, you are responsible for ensuring that the contract is valid. Supported bindings Apache CXF supports the following bindings: SOAP 1.1 SOAP 1.2 CORBA Pure XML | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/fusecxfbindingintro
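As a concrete illustration of the binding element described in the chapter above, the fragment below maps a hypothetical Greeter port type to a SOAP 1.1 document/literal payload format. It is written here as a shell heredoc so it can be appended to a contract file; the file name, the tns:Greeter port type, and the sayHi operation are assumptions, not part of any shipped contract.

# Hypothetical wsdl:binding fragment for a port type named "Greeter" (SOAP 1.1, document/literal).
cat >> greeter.wsdl <<'EOF'
<wsdl:binding name="GreeterSOAPBinding" type="tns:Greeter">
  <soap:binding style="document"
                transport="http://schemas.xmlsoap.org/soap/http"/>
  <wsdl:operation name="sayHi">
    <soap:operation soapAction="" style="document"/>
    <wsdl:input><soap:body use="literal"/></wsdl:input>
    <wsdl:output><soap:body use="literal"/></wsdl:output>
  </wsdl:operation>
</wsdl:binding>
EOF

The name attribute ("GreeterSOAPBinding") is what a port definition later references to tie this binding to an endpoint, and the type attribute names the port type being mapped.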
Chapter 5. Using iPXE to reduce provisioning times | Chapter 5. Using iPXE to reduce provisioning times iPXE is an open-source network-boot firmware. It provides a full PXE implementation enhanced with additional features, such as booting from an HTTP server. For more information about iPXE, see iPXE website . You can use iPXE if the following restrictions prevent you from using PXE: A network with unmanaged DHCP servers. A PXE service that is unreachable because of, for example, a firewall restriction. A TFTP UDP-based protocol that is unreliable because of, for example, a low-bandwidth network. 5.1. Prerequisites for using iPXE You can use iPXE to boot virtual machines in the following cases: Your virtual machines run on a hypervisor that uses iPXE as primary firmware. Your virtual machines are in BIOS mode. In this case, you can configure PXELinux to chainboot iPXE and boot by using the HTTP protocol. For booting virtual machines in UEFI mode by using HTTP, you can follow Section 4.5, "Creating hosts with UEFI HTTP boot provisioning" instead. Supportability Red Hat does not officially support iPXE in Red Hat Satellite. For more information, see Supported architectures and kickstart scenarios in Satellite 6 in the Red Hat Knowledgebase . Host requirements The MAC address of the provisioning interface matches the host configuration. The provisioning interface of the host has a valid DHCP reservation. The NIC is capable of PXE booting. For more information, see supported hardware on ipxe.org for a list of hardware drivers expected to work with an iPXE-based boot disk. The NIC is compatible with iPXE. 5.2. Configuring iPXE environment Configure an iPXE environment on all Capsules that you want to use for iPXE provisioning. Important In Red Hat Enterprise Linux, security-related features of iPXE are not supported and the iPXE binary is built without security features. For this reason, you can only use HTTP but not HTTPS. For more information, see Red Hat Enterprise Linux HTTPS support in iPXE . Prerequisites If you want to use Capsule Servers instead of your Satellite Server, ensure that you have configured your Capsule Servers accordingly. For more information, see Configuring Capsule for Host Registration and Provisioning in Installing Capsule Server . Procedure Enable the TFTP and HTTPboot services on your Capsule: Install the ipxe-bootimgs package on your Capsule: Copy iPXE firmware to the TFTP directory. Copy the iPXE firmware with the Linux kernel header: Copy the UNDI iPXE firmware: Correct the SELinux file contexts: Set the HTTP URL. If you want to use Satellite Server for booting, run the following command on Satellite Server: If you want to use Capsule Server for booting, run the following command on Capsule Server: 5.3. Booting virtual machines Some virtualization hypervisors use iPXE as primary firmware for PXE booting. If you use such a hypervisor, you can boot virtual machines without TFTP and PXELinux. Booting a virtual machine has the following workflow: Virtual machine starts. iPXE retrieves the network credentials, including an HTTP URL, by using DHCP. iPXE loads the iPXE bootstrap template from Capsule. iPXE loads the iPXE template with MAC as a URL parameter from Capsule. iPXE loads the kernel and initial RAM disk of the installer. Prerequisites Your hypervisor must support iPXE. The following virtualization hypervisors support iPXE: libvirt Red Hat Virtualization (deprecated) You have configured your iPXE environment. 
For more information, see Section 5.2, "Configuring iPXE environment" . Note You can use the original templates shipped in Satellite as described below. If you require modification to an original template, clone the template, edit the clone, and associate the clone instead of the original template. For more information, see Section 2.15, "Cloning provisioning templates" . Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates . Search for the Kickstart default iPXE template. Click the name of the template. Click the Association tab and select the operating systems that your host uses. Click the Locations tab and add the location where the host resides. Click the Organizations tab and add the organization that the host belongs to. Click Submit to save the changes. In the Satellite web UI, navigate to Hosts > Operating systems and select the operating system of your host. Click the Templates tab. From the iPXE template list, select the Kickstart default iPXE template. Click Submit to save the changes. In the Satellite web UI, navigate to Hosts > All Hosts . In the Hosts page, select the host that you want to use. Select the Operating System tab. Set PXE Loader to iPXE Embedded . Select the Templates tab. In Provisioning Templates , click Resolve and verify that the iPXE template resolves to the required template. Click Submit to save host settings. 5.4. Chainbooting iPXE from PXELinux You can set up iPXE to use a built-in driver for network communication ( ipxe.lkrn ) or Universal Network Device Interface (UNDI) ( undionly-ipxe.0 ). You can choose to load either file depending on the networking hardware capabilities and iPXE driver availability. UNDI is a minimalistic UDP/IP stack that implements TFTP client. However, UNDI cannot support other protocols like HTTP. To use HTTP with iPXE, use the iPXE build with built-in drivers ( ipxe.lkrn ). Chainbooting iPXE has the following workflow: Host powers on. PXE driver retrieves the network credentials by using DHCP. PXE driver retrieves the PXELinux firmware pxelinux.0 by using TFTP. PXELinux searches for the configuration file on the TFTP server. PXELinux chainloads iPXE ipxe.lkrn or undionly-ipxe.0 . iPXE retrieves the network credentials, including an HTTP URL, by using DHCP again. iPXE chainloads the iPXE template from your Templates Capsule. iPXE loads the kernel and initial RAM disk of the installer. Prerequisites You have configured your iPXE environment. For more information, see Section 5.2, "Configuring iPXE environment" . Note You can use the original templates shipped in Satellite as described below. If you require modification to an original template, clone the template, edit the clone, and associate the clone instead of the original template. For more information, see Section 2.15, "Cloning provisioning templates" . Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates . Search for the required PXELinux template: PXELinux chain iPXE to use ipxe.lkrn PXELinux chain iPXE UNDI to use undionly-ipxe.0 Click the name of the template you want to use. Click the Association tab and select the operating systems that your host uses. Click the Locations tab and add the location where the host resides. Click the Organizations tab and add the organization that the host belongs to. Click Submit to save the changes. On the Provisioning Templates page, search for the Kickstart default iPXE template. Click the name of the template. 
Click the Association tab and associate the template with the operating system that your host uses. Click the Locations tab and add the location where the host resides. Click the Organizations tab and add the organization that the host belongs to. Click Submit to save the changes. In the Satellite web UI, navigate to Hosts > Operating systems and select the operating system of your host. Click the Templates tab. From the PXELinux template list, select the template you want to use. From the iPXE template list, select the Kickstart default iPXE template. Click Submit to save the changes. In the Satellite web UI, navigate to Configure > Host Groups , and select the host group you want to configure. Select the Operating System tab. Select the Architecture and Operating system . Set the PXE Loader : Select PXELinux BIOS to chainboot iPXE ( ipxe.lkrn ) from PXELinux. Select iPXE Chain BIOS to load undionly-ipxe.0 directly. | [
"satellite-installer --foreman-proxy-httpboot true --foreman-proxy-tftp true",
"satellite-maintain packages install ipxe-bootimgs",
"cp /usr/share/ipxe/ipxe.lkrn /var/lib/tftpboot/",
"cp /usr/share/ipxe/undionly.kpxe /var/lib/tftpboot/undionly-ipxe.0",
"restorecon -RvF /var/lib/tftpboot/",
"satellite-installer --foreman-proxy-dhcp-ipxefilename \"http:// satellite.example.com /unattended/iPXE?bootstrap=1\"",
"satellite-installer --foreman-proxy-dhcp-ipxe-bootstrap true"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/provisioning_hosts/using-ipxe-to-reduce-provisioning-times_provisioning |
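After running the installer commands shown above, it can be useful to confirm that the firmware files and the bootstrap template are actually being served before booting a host. The checks below are a sketch; satellite.example.com is the same placeholder host name used in the commands above.

# Confirm the iPXE firmware files are in the TFTP root under the expected names.
ls -l /var/lib/tftpboot/ipxe.lkrn /var/lib/tftpboot/undionly-ipxe.0

# Confirm the iPXE bootstrap template is served over HTTP (placeholder host name).
curl -s "http://satellite.example.com/unattended/iPXE?bootstrap=1" | head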
4.325. tmpwatch | 4.325. tmpwatch 4.325.1. RHBA-2011:1199 - tmpwatch bug fix update An updated tmpwatch package that fixes one bug is now available for Red Hat Enterprise Linux 6. The tmpwatch utility recursively searches through specified directories and removes files which have not been accessed in a specified period of time. Tmpwatch is normally used to clean up directories which are used for temporarily holding files (for example, /tmp). Bug Fix BZ# 722856 When searching for files or directories to remove, tmpwatch was reporting all failures to access these files or directories. This included expected access failures due to the restrictive default configuration of FUSE mount points. With this update, tmpwatch now silently ignores all EACCES errors, and the expected access failures regarding FUSE mount points are no longer reported. All users are advised to upgrade to this updated tmpwatch package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/tmpwatch |
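As a usage sketch, the commands below remove files under /tmp that have not been accessed for ten days; the directory and retention period are arbitrary examples, and on Red Hat Enterprise Linux 6 the cron job shipped in /etc/cron.daily normally runs tmpwatch automatically.

# Remove files under /tmp not accessed in the last 10 days (atime is the default test).
tmpwatch 10d /tmp

# Preview what would be removed without deleting anything.
tmpwatch --test 10d /tmp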
19.3. Unmounting a File System | 19.3. Unmounting a File System To detach a previously mounted file system, use either of the following variants of the umount command: Note that unless this is performed while logged in as root , the correct permissions must be available to unmount the file system. For more information, see Section 19.2.2, "Specifying the Mount Options" . See Example 19.9, "Unmounting a CD" for an example usage. Important When a file system is in use (for example, when a process is reading a file on this file system, or when it is used by the kernel), running the umount command fails with an error. To determine which processes are accessing the file system, use the fuser command in the following form: For example, to list the processes that are accessing a file system mounted to the /media/cdrom/ directory: Example 19.9. Unmounting a CD To unmount a CD that was previously mounted to the /media/cdrom/ directory, use the following command: | [
"umount directory USD umount device",
"fuser -m directory",
"fuser -m /media/cdrom /media/cdrom: 1793 2013 2022 2435 10532c 10672c",
"umount /media/cdrom"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/sect-using_the_mount_command-unmounting |
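When fuser shows that the mount point is still busy, you can either stop the listed processes or detach the file system lazily. The commands below sketch both options; killing processes with fuser -k is disruptive, so use it only when the processes can be terminated safely.

# Option 1: terminate the processes holding the file system, then unmount.
fuser -km /media/cdrom
umount /media/cdrom

# Option 2: lazy unmount - detach the mount point now and clean up once it is no longer busy.
umount -l /media/cdrom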
4.39. cups | 4.39. cups 4.39.1. RHSA-2011:1635 - Low: cups security and bug fix update Updated cups packages that fix one security issue and several bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The Common UNIX Printing System (CUPS) provides a portable printing layer for UNIX operating systems. Security Fix CVE-2011-2896 A heap-based buffer overflow flaw was found in the Lempel-Ziv-Welch (LZW) decompression algorithm implementation used by the CUPS GIF image format reader. An attacker could create a malicious GIF image file that, when printed, could possibly cause CUPS to crash or, potentially, execute arbitrary code with the privileges of the "lp" user. Bug Fixes BZ# 681836 Previously CUPS was not correctly handling the language setting LANG=en_US.ASCII. As a consequence lpadmin, lpstat and lpinfo binaries were not displaying any output when the LANG=en_US.ASCII environment variable was used. As a result of this update the problem is fixed and the expected output is now displayed. BZ# 706673 Previously the scheduler did not check for empty values of several configuration directives. As a consequence it was possible for the CUPS daemon (cupsd) to crash when a configuration file contained certain empty values. With this update the problem is fixed and cupsd no longer crashes when reading such a configuration file. BZ# 709896 Previously when printing to a raw print queue, when using certain printer models, CUPS was incorrectly sending SNMP queries. As a consequence there was a noticeable 4-second delay between queueing the job and the start of printing. With this update the problem is fixed and CUPS no longer tries to collect SNMP supply and status information for raw print queues. BZ# 712430 Previously when using the BrowsePoll directive it could happen that the CUPS printer polling daemon (cups-polld) began polling before the network interfaces were set up after a system boot. CUPS was then caching the failed hostname lookup. As a consequence no printers were found and the error, "Host name lookup failure", was logged. With this update the code that re-initializes the resolver after failure in cups-polld is fixed and as a result CUPS will obtain the correct network settings to use in printer discovery. BZ# 735505 The MaxJobs directive controls the maximum number of print jobs that are kept in memory. Previously, once the number of jobs reached the limit, the CUPS system failed to automatically purge the data file associated with the oldest completed job from the system in order to make room for a new print job. This bug has been fixed, and the jobs beyond the set limit are now properly purged. BZ# 744791 The cups init script (/etc/rc.d/init.d/cups) uses the daemon function (from /etc/rc.d/init.d/functions) to start the cups process, but previously it did not source a configuration file from the /etc/sysconfig/ directory. As a consequence, it was difficult to cleanly set the nice level or cgroup for the cups daemon by setting the NICELEVEL or CGROUP_DAEMON variables. With this update, the init script is fixed. All users of CUPS are advised to upgrade to these updated packages, which contain backported patches to resolve these issues. 
After installing this update, the cupsd daemon will be restarted automatically. 4.39.2. RHBA-2012:0418 - cups bug fix update Updated cups packages that fix one bug are now available for Red Hat Enterprise Linux 6. The Common UNIX Printing System (CUPS) provides a portable printing layer for Linux, UNIX, and similar operating systems. Bug Fix BZ# 803419 Previously, empty jobs could be created using the "lp" command either by submitting an empty file to print (for example by executing "lp /dev/null") or by providing an empty file as standard input. In this way, a job was created but was never processed. With this update, creation of empty print jobs is not allowed, and the user is now informed that no file is in the request. All users of cups are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/cups |
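With the fixed init script sourcing its file under /etc/sysconfig/, the nice level or control group for cupsd can be set declaratively, as described for BZ#744791 above. The snippet below is a sketch of that approach; the exact file name /etc/sysconfig/cups and the chosen nice level are assumptions used to illustrate the mechanism.

# Sketch: run cupsd at a lower scheduling priority via the init script's sysconfig file.
echo 'NICELEVEL="+5"' >> /etc/sysconfig/cups
service cups restart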
Chapter 1. Notification of name change to Streams for Apache Kafka | Chapter 1. Notification of name change to Streams for Apache Kafka AMQ Streams is being renamed as streams for Apache Kafka as part of a branding effort. This change aims to increase awareness among customers of Red Hat's product for Apache Kafka. During this transition period, you may encounter references to the old name, AMQ Streams. We are actively working to update our documentation, resources, and media to reflect the new name. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/release_notes_for_amq_streams_2.5_on_openshift/ref-name-change-str |
Chapter 4. Dynamically provisioned OpenShift Data Foundation deployed on Microsoft Azure | Chapter 4. Dynamically provisioned OpenShift Data Foundation deployed on Microsoft Azure 4.1. Replacing operational or failed storage devices on Azure installer-provisioned infrastructure When you need to replace a device in a dynamically created storage cluster on an Azure installer-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: Replacing operational nodes on Azure installer-provisioned infrastructure . Replacing failed nodes on Azure installer-provisioned infrastructures . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/replacing_devices/dynamically_provisioned_openshift_data_foundation_deployed_on_microsoft_azure |
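Before replacing the node, it can help to identify which node hosts the affected object storage daemon (OSD). The following commands are a sketch and assume the default openshift-storage namespace and the standard rook-ceph-osd pod label used by OpenShift Data Foundation:

$ oc get pods -n openshift-storage -l app=rook-ceph-osd -o wide
$ oc get nodes

The NODE column of the first command shows the worker that carries the failing OSD pod; that is the node to replace by following the procedures linked above.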
28.6. Using the Maintenance Boot Modes | 28.6. Using the Maintenance Boot Modes 28.6.1. Verifying Boot Media You can test the integrity of an ISO-based installation source before using it to install Red Hat Enterprise Linux. These sources include DVDs, and ISO images stored on a hard drive or NFS server. Verifying that the ISO images are intact before you attempt an installation helps to avoid problems that are often encountered during installation. Red Hat Enterprise Linux offers you two ways to test installation ISOs: select OK at the prompt to test the media before installation when booting from the Red Hat Enterprise Linux DVD, or boot Red Hat Enterprise Linux with the mediacheck option. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-boot-modes
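For example, the mediacheck test can be started from the installer boot prompt, and an ISO image stored on disk can be checked with the checkisomd5 utility from the isomd5sum package. The exact boot prompt syntax and the image path below are illustrative assumptions:

boot: linux mediacheck

$ checkisomd5 /path/to/rhel-server-6-dvd.iso

Both checks rely on the checksum embedded in Red Hat installation images; a failed result indicates that the image should be downloaded or written to media again.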
Chapter 9. cert-manager Operator for Red Hat OpenShift | Chapter 9. cert-manager Operator for Red Hat OpenShift 9.1. cert-manager Operator for Red Hat OpenShift overview The cert-manager Operator for Red Hat OpenShift is a cluster-wide service that provides application certificate lifecycle management. The cert-manager Operator for Red Hat OpenShift allows you to integrate with external certificate authorities and provides certificate provisioning, renewal, and retirement. 9.1.1. About the cert-manager Operator for Red Hat OpenShift The cert-manager project introduces certificate authorities and certificates as resource types in the Kubernetes API, which makes it possible to provide certificates on demand to developers working within your cluster. The cert-manager Operator for Red Hat OpenShift provides a supported way to integrate cert-manager into your OpenShift Container Platform cluster. The cert-manager Operator for Red Hat OpenShift provides the following features: Support for integrating with external certificate authorities Tools to manage certificates Ability for developers to self-serve certificates Automatic certificate renewal Important Do not attempt to use both cert-manager Operator for Red Hat OpenShift for OpenShift Container Platform and the community cert-manager Operator at the same time in your cluster. Also, you should not install cert-manager Operator for Red Hat OpenShift for OpenShift Container Platform in multiple namespaces within a single OpenShift cluster. 9.1.2. cert-manager Operator for Red Hat OpenShift issuer providers The cert-manager Operator for Red Hat OpenShift has been tested with the following issuer types: Automated Certificate Management Environment (ACME) Certificate Authority (CA) Self-signed Vault Venafi Nokia NetGuard Certificate Manager (NCM) Google cloud Certificate Authority Service (Google CAS) 9.1.2.1. Testing issuer types The following table outlines the test coverage for each tested issuer type: Issuer Type Test Status Notes ACME Fully Tested Verified with standard ACME implementations. CA Fully Tested Ensures basic CA functionality. Self-signed Fully Tested Ensures basic self-signed functionality. Vault Fully Tested Limited to standard Vault setups due to infrastructure access constraints. Venafi Partially tested Subject to provider-specific limitations. NCM Partially Tested Subject to provider-specific limitations. Google CAS Partially Tested Compatible with common CA configurations. Note OpenShift Container Platform does not test all factors associated with third-party cert-manager Operator for Red Hat OpenShift provider functionality. For more information about third-party support, see the OpenShift Container Platform third-party support policy . 9.1.3. Certificate request methods There are two ways to request a certificate using the cert-manager Operator for Red Hat OpenShift: Using the cert-manager.io/CertificateRequest object With this method a service developer creates a CertificateRequest object with a valid issuerRef pointing to a configured issuer (configured by a service infrastructure administrator). A service infrastructure administrator then accepts or denies the certificate request. Only accepted certificate requests create a corresponding certificate. Using the cert-manager.io/Certificate object With this method, a service developer creates a Certificate object with a valid issuerRef and obtains a certificate from a secret that they pointed to the Certificate object. 9.1.4. 
Supported cert-manager Operator for Red Hat OpenShift versions For the list of supported versions of the cert-manager Operator for Red Hat OpenShift across different OpenShift Container Platform releases, see the "Platform Agnostic Operators" section in the OpenShift Container Platform update and support policy . 9.1.5. About FIPS compliance for cert-manager Operator for Red Hat OpenShift Starting with version 1.14.0, cert-manager Operator for Red Hat OpenShift is designed for FIPS compliance. When running on OpenShift Container Platform in FIPS mode, it uses the RHEL cryptographic libraries submitted to NIST for FIPS validation on the x86_64, ppc64le, and s390X architectures. For more information about the NIST validation program, see Cryptographic module validation program . For the latest NIST status for the individual versions of the RHEL cryptographic libraries submitted for validation, see Compliance activities and government standards . To enable FIPS mode, you must install cert-manager Operator for Red Hat OpenShift on an OpenShift Container Platform cluster configured to operate in FIPS mode. For more information, see "Do you need extra security for your cluster?" 9.1.6. Additional resources cert-manager project documentation Understanding compliance Installing a cluster in FIPS mode Do you need extra security for your cluster? 9.2. cert-manager Operator for Red Hat OpenShift release notes The cert-manager Operator for Red Hat OpenShift is a cluster-wide service that provides application certificate lifecycle management. These release notes track the development of cert-manager Operator for Red Hat OpenShift. For more information, see About the cert-manager Operator for Red Hat OpenShift . 9.2.1. cert-manager Operator for Red Hat OpenShift 1.15.1 Issued: 2025-03-13 The following advisories are available for the cert-manager Operator for Red Hat OpenShift 1.15.1: RHEA-Advisory-2733 RHEA-Advisory-2780 RHEA-Advisory-2821 RHEA-Advisory-2828 Version 1.15.1 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.15.5 . For more information, see the cert-manager project release notes for v1.15.5 . 9.2.1.1. New features and enhancements Integrating the cert-manager Operator for Red Hat OpenShift with Istio-CSR (Technology Preview) The cert-manager Operator for Red Hat OpenShift now supports the Istio-CSR. With this integration, cert-manager Operator's issuers can issue, sign, and renew certificates for mutual TLS (mTLS) communication. Red Hat OpenShift Service Mesh and Istio can now request these certificates directly from the cert-manager Operator. For more information, see Integrating the cert-manager Operator with Istio-CSR . 9.2.1.2. CVEs CVE-2024-9287 CVE-2024-45336 CVE-2024-45341 9.2.2. cert-manager Operator for Red Hat OpenShift 1.15.0 Issued: 2025-01-22 The following advisories are available for the cert-manager Operator for Red Hat OpenShift 1.15.0: RHEA-2025:0487 RHSA-2025:0535 RHSA-2025:0536 Version 1.15.0 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.15.4 . For more information, see the cert-manager project release notes for v1.15.4 . 9.2.2.1. New features and enhancements Scheduling overrides for cert-manager Operator for Red Hat OpenShift With this release, you can configure scheduling overrides for cert-manager Operator for Red Hat OpenShift, including the cert-manager controller, webhook, and CA injector. 
Google CAS issuer The cert-manager Operator for Red Hat OpenShift now supports the Google Certificate Authority Service (CAS) issuer. The google-cas-issuer is an external issuer for cert-manager that automates certificate lifecycle management, including issuance and renewal, with CAS-managed private certificate authorities. Note The Google CAS issuer is validated only with version 0.9.0 and cert-manager Operator for Red Hat OpenShift version 1.15.0. These versions support tasks such as issuing, renewing, and managing certificates for the API server and ingress controller in OpenShift Container Platform clusters. Default installMode updated to AllNamespaces Starting from version 1.15.0, the default and recommended Operator Lifecycle Manager (OLM) installMode is AllNamespaces . Previously, the default was SingleNamespace . This change aligns with best practices for multi-namespace Operator management. For more information, see OCPBUGS-23406 . Redundant kube-rbac-proxy sidecar removed The Operator no longer includes the redundant kube-rbac-proxy sidecar container, reducing resource usage and complexity. For more information, see CM-436 . 9.2.2.2. CVEs CVE-2024-35255 CVE-2024-28180 CVE-2024-24783 CVE-2024-6104 CVE-2023-45288 CVE-2024-45337 CVE-2024-45338 9.2.3. cert-manager Operator for Red Hat OpenShift 1.14.1 Issued: 2024-11-04 The following advisory is available for the cert-manager Operator for Red Hat OpenShift 1.14.1: RHEA-2024:8787 Version 1.14.1 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.14.7 . For more information, see the cert-manager project release notes for v1.14.7 . 9.2.3.1. CVEs CVE-2024-33599 CVE-2024-2961 9.2.4. cert-manager Operator for Red Hat OpenShift 1.14.0 Issued: 2024-07-08 The following advisory is available for the cert-manager Operator for Red Hat OpenShift 1.14.0: RHEA-2024:4360 Version 1.14.0 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.14.5 . For more information, see the cert-manager project release notes for v1.14.5 . 9.2.4.1. New features and enhancements FIPS compliance support With this release, FIPS mode is now automatically enabled for cert-manager Operator for Red Hat OpenShift. When installed on an OpenShift Container Platform cluster in FIPS mode, cert-manager Operator for Red Hat OpenShift ensures compatibility without affecting the cluster's FIPS support status. Securing routes with cert-manager managed certificates (Technology Preview) With this release, you can manage certificates referenced in Route resources by using the cert-manager Operator for Red Hat OpenShift. For more information, see Securing routes with the cert-manager Operator for Red Hat OpenShift . NCM issuer The cert-manager Operator for Red Hat OpenShift now supports the Nokia NetGuard Certificate Manager (NCM) issuer. The ncm-issuer is a cert-manager external issuer that integrates with the NCM PKI system using a Kubernetes controller to sign certificate requests. This integration streamlines the process of obtaining non-self-signed certificates for applications, ensuring their validity and keeping them updated. Note The NCM issuer is validated only with version 1.1.1 and the cert-manager Operator for Red Hat OpenShift version 1.14.0. This version handles tasks such as issuance, renewal, and managing certificates for the API server and ingress controller of OpenShift Container Platform clusters. 9.2.4.2. 
CVEs CVE-2023-45288 CVE-2024-28180 CVE-2020-8559 CVE-2024-26147 CVE-2024-24783 9.2.5. cert-manager Operator for Red Hat OpenShift 1.13.1 Issued: 2024-05-15 The following advisory is available for the cert-manager Operator for Red Hat OpenShift 1.13.1: RHEA-2024:2849 Version 1.13.1 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.13.6 . For more information, see the cert-manager project release notes for v1.13.6 . 9.2.5.1. CVEs CVE-2023-45288 CVE-2023-48795 CVE-2024-24783 9.2.6. cert-manager Operator for Red Hat OpenShift 1.13.0 Issued: 2024-01-16 The following advisory is available for the cert-manager Operator for Red Hat OpenShift 1.13.0: RHEA-2024:0259 Version 1.13.0 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.13.3 . For more information, see the cert-manager project release notes for v1.13.0 . 9.2.6.1. New features and enhancements You can now manage certificates for API Server and Ingress Controller by using the cert-manager Operator for Red Hat OpenShift. For more information, see Configuring certificates with an issuer . With this release, the scope of the cert-manager Operator for Red Hat OpenShift, which was previously limited to the OpenShift Container Platform on AMD64 architecture, has now been expanded to include support for managing certificates on OpenShift Container Platform running on IBM Z(R) ( s390x ), IBM Power(R) ( ppc64le ) and ARM64 architectures. With this release, you can use DNS over HTTPS (DoH) for performing the self-checks during the ACME DNS-01 challenge verification. The DNS self-check method can be controlled by using the command line flags, --dns01-recursive-nameservers-only and --dns01-recursive-nameservers . For more information, see Customizing cert-manager by overriding arguments from the cert-manager Operator API . 9.2.6.2. CVEs CVE-2023-39615 CVE-2023-3978 CVE-2023-37788 CVE-2023-29406 9.3. Installing the cert-manager Operator for Red Hat OpenShift Important The cert-manager Operator for Red Hat OpenShift version 1.15 or later supports the AllNamespaces , SingleNamespace , and OwnNamespace installation modes. Earlier versions, such as 1.14, support only the SingleNamespace and OwnNamespace installation modes. The cert-manager Operator for Red Hat OpenShift is not installed in OpenShift Container Platform by default. You can install the cert-manager Operator for Red Hat OpenShift by using the web console. 9.3.1. Installing the cert-manager Operator for Red Hat OpenShift 9.3.1.1. Installing the cert-manager Operator for Red Hat OpenShift by using the web console You can use the web console to install the cert-manager Operator for Red Hat OpenShift. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Enter cert-manager Operator for Red Hat OpenShift into the filter box. Select the cert-manager Operator for Red Hat OpenShift Select the cert-manager Operator for Red Hat OpenShift version from Version drop-down list, and click Install . Note See supported cert-manager Operator for Red Hat OpenShift versions in the following "Additional resources" section. On the Install Operator page: Update the Update channel , if necessary. The channel defaults to stable-v1 , which installs the latest stable release of the cert-manager Operator for Red Hat OpenShift. 
Choose the Installed Namespace for the Operator. The default Operator namespace is cert-manager-operator . If the cert-manager-operator namespace does not exist, it is created for you. Note During the installation, the OpenShift Container Platform web console allows you to select between AllNamespaces and SingleNamespace installation modes. For installations with cert-manager Operator for Red Hat OpenShift version 1.15.0 or later, it is recommended to choose the AllNamespaces installation mode. SingleNamespace and OwnNamespace support will remain for earlier versions but will be deprecated in future versions. Select an Update approval strategy. The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verification Navigate to Operators Installed Operators . Verify that cert-manager Operator for Red Hat OpenShift is listed with a Status of Succeeded in the cert-manager-operator namespace. Verify that cert-manager pods are up and running by entering the following command: USD oc get pods -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 3m39s cert-manager-cainjector-56cc5f9868-7g9z7 1/1 Running 0 4m5s cert-manager-webhook-d4f79d7f7-9dg9w 1/1 Running 0 4m9s You can use the cert-manager Operator for Red Hat OpenShift only after cert-manager pods are up and running. 9.3.1.2. Installing the cert-manager Operator for Red Hat OpenShift by using the CLI Prerequisites You have access to the cluster with cluster-admin privileges. Procedure Create a new project named cert-manager-operator by running the following command: USD oc new-project cert-manager-operator Create an OperatorGroup object: Create a YAML file, for example, operatorGroup.yaml , with the following content: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: targetNamespaces: - "cert-manager-operator" For cert-manager Operator for Red Hat OpenShift v1.15.0 or later, create a YAML file with the following content: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: targetNamespaces: [] spec: {} Note Starting from cert-manager Operator for Red Hat OpenShift version 1.15.0, it is recommended to install the Operator using the AllNamespaces OLM installMode . Older versions can continue using the SingleNamespace or OwnNamespace OLM installMode . Support for SingleNamespace and OwnNamespace will be deprecated in future versions. 
Create the OperatorGroup object by running the following command: USD oc create -f operatorGroup.yaml Create a Subscription object: Create a YAML file, for example, subscription.yaml , that defines the Subscription object: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: channel: stable-v1 name: openshift-cert-manager-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic Create the Subscription object by running the following command: USD oc create -f subscription.yaml Verification Verify that the OLM subscription is created by running the following command: USD oc get subscription -n cert-manager-operator Example output NAME PACKAGE SOURCE CHANNEL openshift-cert-manager-operator openshift-cert-manager-operator redhat-operators stable-v1 Verify whether the Operator is successfully installed by running the following command: USD oc get csv -n cert-manager-operator Example output NAME DISPLAY VERSION REPLACES PHASE cert-manager-operator.v1.13.0 cert-manager Operator for Red Hat OpenShift 1.13.0 cert-manager-operator.v1.12.1 Succeeded Verify that the status cert-manager Operator for Red Hat OpenShift is Running by running the following command: USD oc get pods -n cert-manager-operator Example output NAME READY STATUS RESTARTS AGE cert-manager-operator-controller-manager-695b4d46cb-r4hld 2/2 Running 0 7m4s Verify that the status of cert-manager pods is Running by running the following command: USD oc get pods -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-58b7f649c4-dp6l4 1/1 Running 0 7m1s cert-manager-cainjector-5565b8f897-gx25h 1/1 Running 0 7m37s cert-manager-webhook-9bc98cbdd-f972x 1/1 Running 0 7m40s Additional resources Supported cert-manager Operator for Red Hat OpenShift versions 9.3.2. Understanding update channels of the cert-manager Operator for Red Hat OpenShift Update channels are the mechanism by which you can declare the version of your cert-manager Operator for Red Hat OpenShift in your cluster. The cert-manager Operator for Red Hat OpenShift offers the following update channels: stable-v1 stable-v1.y 9.3.2.1. stable-v1 channel The stable-v1 channel is the default and suggested channel while installing the cert-manager Operator for Red Hat OpenShift. The stable-v1 channel installs and updates the latest release version of the cert-manager Operator for Red Hat OpenShift. Select the stable-v1 channel if you want to use the latest stable release of the cert-manager Operator for Red Hat OpenShift. The stable-v1 channel offers the following update approval strategies: Automatic If you choose automatic updates for an installed cert-manager Operator for Red Hat OpenShift, a new version of the cert-manager Operator for Red Hat OpenShift is available in the stable-v1 channel. The Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. Manual If you select manual updates, when a newer version of the cert-manager Operator for Red Hat OpenShift is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the cert-manager Operator for Red Hat OpenShift updated to the new version. 9.3.2.2. stable-v1.y channel The y-stream version of the cert-manager Operator for Red Hat OpenShift installs updates from the stable-v1.y channels such as stable-v1.10 , stable-v1.11 , and stable-v1.12 . 
Select the stable-v1.y channel if you want to use the y-stream version and stay updated to the z-stream version of the cert-manager Operator for Red Hat OpenShift. The stable-v1.y channel offers the following update approval strategies: Automatic If you choose automatic updates for an installed cert-manager Operator for Red Hat OpenShift, a new z-stream version of the cert-manager Operator for Red Hat OpenShift is available in the stable-v1.y channel. OLM automatically upgrades the running instance of your Operator without human intervention. Manual If you select manual updates, when a newer version of the cert-manager Operator for Red Hat OpenShift is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the cert-manager Operator for Red Hat OpenShift updated to the new version of the z-stream releases. 9.3.3. Additional resources Adding Operators to a cluster Updating installed Operators 9.4. Configuring the egress proxy for the cert-manager Operator for Red Hat OpenShift If a cluster-wide egress proxy is configured in OpenShift Container Platform, Operator Lifecycle Manager (OLM) automatically configures Operators that it manages with the cluster-wide proxy. OLM automatically updates all of the Operator's deployments with the HTTP_PROXY , HTTPS_PROXY , NO_PROXY environment variables. You can inject any CA certificates that are required for proxying HTTPS connections into the cert-manager Operator for Red Hat OpenShift. 9.4.1. Injecting a custom CA certificate for the cert-manager Operator for Red Hat OpenShift If your OpenShift Container Platform cluster has the cluster-wide proxy enabled, you can inject any CA certificates that are required for proxying HTTPS connections into the cert-manager Operator for Red Hat OpenShift. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have enabled the cluster-wide proxy for OpenShift Container Platform. 
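You can confirm that the cluster-wide proxy is in place before you inject the CA bundle. The following check is a sketch that assumes the default cluster proxy object name:

$ oc get proxy/cluster -o jsonpath='{.spec.httpProxy}{"\n"}{.spec.httpsProxy}{"\n"}{.spec.trustedCA.name}{"\n"}'

Non-empty httpProxy or httpsProxy values indicate that the cluster-wide proxy is enabled, and the trustedCA field names the config map that holds the additional CA bundle, if one was configured.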
Procedure Create a config map in the cert-manager namespace by running the following command: USD oc create configmap trusted-ca -n cert-manager Inject the CA bundle that is trusted by OpenShift Container Platform into the config map by running the following command: USD oc label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true -n cert-manager Update the deployment for the cert-manager Operator for Red Hat OpenShift to use the config map by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{"spec":{"config":{"env":[{"name":"TRUSTED_CA_CONFIGMAP_NAME","value":"trusted-ca"}]}}}' Verification Verify that the deployments have finished rolling out by running the following command: USD oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator && \ oc rollout status deployment/cert-manager -n cert-manager && \ oc rollout status deployment/cert-manager-webhook -n cert-manager && \ oc rollout status deployment/cert-manager-cainjector -n cert-manager Example output deployment "cert-manager-operator-controller-manager" successfully rolled out deployment "cert-manager" successfully rolled out deployment "cert-manager-webhook" successfully rolled out deployment "cert-manager-cainjector" successfully rolled out Verify that the CA bundle was mounted as a volume by running the following command: USD oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.'containers[0].volumeMounts'} Example output [{"mountPath":"/etc/pki/tls/certs/cert-manager-tls-ca-bundle.crt","name":"trusted-ca","subPath":"ca-bundle.crt"}] Verify that the source of the CA bundle is the trusted-ca config map by running the following command: USD oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.volumes} Example output [{"configMap":{"defaultMode":420,"name":"trusted-ca"},"name":"trusted-ca"}] 9.4.2. Additional resources Configuring proxy support in Operator Lifecycle Manager 9.5. Customizing cert-manager Operator API fields You can customize the cert-manager Operator for Red Hat OpenShift API fields by overriding environment variables and arguments. Warning To override unsupported arguments, you can add spec.unsupportedConfigOverrides section in the CertManager resource, but using spec.unsupportedConfigOverrides is unsupported. 9.5.1. Customizing cert-manager by overriding environment variables from the cert-manager Operator API You can override the supported environment variables for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. Procedure Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: overrideEnv: - name: HTTP_PROXY value: http://<proxy_url> 1 - name: HTTPS_PROXY value: https://<proxy_url> 2 - name: NO_PROXY value: <ignore_proxy_domains> 3 1 2 Replace <proxy_url> with the proxy server URL. 3 Replace <ignore_proxy_domains> with a comma separated list of domains. These domains are ignored by the proxy server. Save your changes and quit the text editor to apply your changes. 
Verification Verify that the cert-manager controller pod is redeployed by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s Verify that environment variables are updated for the cert-manager pod by running the following command: USD oc get pod <redeployed_cert-manager_controller_pod> -n cert-manager -o yaml Example output env: ... - name: HTTP_PROXY value: http://<PROXY_URL> - name: HTTPS_PROXY value: https://<PROXY_URL> - name: NO_PROXY value: <IGNORE_PROXY_DOMAINS> 9.5.2. Customizing cert-manager by overriding arguments from the cert-manager Operator API You can override the supported arguments for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. Procedure Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: overrideArgs: - '--dns01-recursive-nameservers=<server_address>' 1 - '--dns01-recursive-nameservers-only' 2 - '--acme-http01-solver-nameservers=<host>:<port>' 3 - '--v=<verbosity_level>' 4 - '--metrics-listen-address=<host>:<port>' 5 - '--issuer-ambient-credentials' 6 webhookConfig: overrideArgs: - '--v=4' 7 cainjectorConfig: overrideArgs: - '--v=2' 8 1 Provide a comma-separated list of nameservers to query for the DNS-01 self check. The nameservers can be specified either as <host>:<port> , for example, 1.1.1.1:53 , or use DNS over HTTPS (DoH), for example, https://1.1.1.1/dns-query . 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the Automated Certificate Management Environment (ACME) HTTP01 self check. For example, --acme-http01-solver-nameservers=1.1.1.1:53 . 4 7 8 Specify to set the log level verbosity to determine the verbosity of log messages. 5 Specify the host and port for the metrics endpoint. The default value is --metrics-listen-address=0.0.0.0:9402 . 6 You must use the --issuer-ambient-credentials argument when configuring an ACME Issuer to solve DNS-01 challenges by using ambient credentials. Note DNS over HTTPS (DoH) is supported starting only from cert-manager Operator for Red Hat OpenShift version 1.13.0 and later. Save your changes and quit the text editor to apply your changes. Verification Verify that arguments are updated for cert-manager pods by running the following command: USD oc get pods -n cert-manager -o yaml Example output ... metadata: name: cert-manager-6d4b5d4c97-kldwl namespace: cert-manager ... spec: containers: - args: - --acme-http01-solver-nameservers=1.1.1.1:53 - --cluster-resource-namespace=USD(POD_NAMESPACE) - --dns01-recursive-nameservers=1.1.1.1:53 - --dns01-recursive-nameservers-only - --leader-election-namespace=kube-system - --max-concurrent-challenges=60 - --metrics-listen-address=0.0.0.0:9042 - --v=6 ... metadata: name: cert-manager-cainjector-866c4fd758-ltxxj namespace: cert-manager ... spec: containers: - args: - --leader-election-namespace=kube-system - --v=2 ... 
metadata: name: cert-manager-webhook-6d48f88495-c88gd namespace: cert-manager ... spec: containers: - args: ... - --v=4 9.5.3. Deleting a TLS secret automatically upon Certificate removal You can enable the --enable-certificate-owner-ref flag for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. The --enable-certificate-owner-ref flag sets the certificate resource as an owner of the secret where the TLS certificate is stored. Warning If you uninstall the cert-manager Operator for Red Hat OpenShift or delete certificate resources from the cluster, the secret is deleted automatically. This might cause network connectivity issues depending upon where the certificate TLS secret is being used. Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. You have installed version 1.12.0 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Check that the Certificate object and its secret are available by running the following command: USD oc get certificate Example output NAME READY SECRET AGE certificate-from-clusterissuer-route53-ambient True certificate-from-clusterissuer-route53-ambient 8h Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster # ... spec: # ... controllerConfig: overrideArgs: - '--enable-certificate-owner-ref' Save your changes and quit the text editor to apply your changes. Verification Verify that the --enable-certificate-owner-ref flag is updated for cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager -o yaml Example output # ... metadata: name: cert-manager-6e4b4d7d97-zmdnb namespace: cert-manager # ... spec: containers: - args: - --enable-certificate-owner-ref 9.5.4. Overriding CPU and memory limits for the cert-manager components After installing the cert-manager Operator for Red Hat OpenShift, you can configure the CPU and memory limits from the cert-manager Operator for Red Hat OpenShift API for the cert-manager components such as cert-manager controller, CA injector, and Webhook. Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. You have installed version 1.12.0 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Check that the deployments of the cert-manager controller, CA injector, and Webhook are available by entering the following command: USD oc get deployment -n cert-manager Example output NAME READY UP-TO-DATE AVAILABLE AGE cert-manager 1/1 1 1 53m cert-manager-cainjector 1/1 1 1 53m cert-manager-webhook 1/1 1 1 53m Before setting the CPU and memory limit, check the existing configuration for the cert-manager controller, CA injector, and Webhook by entering the following command: USD oc get deployment -n cert-manager -o yaml Example output # ... metadata: name: cert-manager namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-controller resources: {} 1 # ... metadata: name: cert-manager-cainjector namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-cainjector resources: {} 2 # ... metadata: name: cert-manager-webhook namespace: cert-manager # ... 
spec: template: spec: containers: - name: cert-manager-webhook resources: {} 3 # ... 1 2 3 The spec.resources field is empty by default. The cert-manager components do not have CPU and memory limits. To configure the CPU and memory limits for the cert-manager controller, CA injector, and Webhook, enter the following command: USD oc patch certmanager.operator cluster --type=merge -p=" spec: controllerConfig: overrideResources: limits: 1 cpu: 200m 2 memory: 64Mi 3 requests: 4 cpu: 10m 5 memory: 16Mi 6 webhookConfig: overrideResources: limits: 7 cpu: 200m 8 memory: 64Mi 9 requests: 10 cpu: 10m 11 memory: 16Mi 12 cainjectorConfig: overrideResources: limits: 13 cpu: 200m 14 memory: 64Mi 15 requests: 16 cpu: 10m 17 memory: 16Mi 18 " 1 Defines the maximum amount of CPU and memory that a single container in a cert-manager controller pod can request. 2 5 You can specify the CPU limit that a cert-manager controller pod can request. The default value is 10m . 3 6 You can specify the memory limit that a cert-manager controller pod can request. The default value is 32Mi . 4 Defines the amount of CPU and memory set by scheduler for the cert-manager controller pod. 7 Defines the maximum amount of CPU and memory that a single container in a CA injector pod can request. 8 11 You can specify the CPU limit that a CA injector pod can request. The default value is 10m . 9 12 You can specify the memory limit that a CA injector pod can request. The default value is 32Mi . 10 Defines the amount of CPU and memory set by scheduler for the CA injector pod. 13 Defines the maximum amount of CPU and memory Defines the maximum amount of CPU and memory that a single container in a Webhook pod can request. 14 17 You can specify the CPU limit that a Webhook pod can request. The default value is 10m . 15 18 You can specify the memory limit that a Webhook pod can request. The default value is 32Mi . 16 Defines the amount of CPU and memory set by scheduler for the Webhook pod. Example output certmanager.operator.openshift.io/cluster patched Verification Verify that the CPU and memory limits are updated for the cert-manager components: USD oc get deployment -n cert-manager -o yaml Example output # ... metadata: name: cert-manager namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-controller resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi # ... metadata: name: cert-manager-cainjector namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-cainjector resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi # ... metadata: name: cert-manager-webhook namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-webhook resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi # ... 9.5.5. Configuring scheduling overrides for cert-manager components You can configure the pod scheduling from the cert-manager Operator for Red Hat OpenShift API for the cert-manager Operator for Red Hat OpenShift components such as cert-manager controller, CA injector, and Webhook. Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. You have installed version 1.15.0 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Update the certmanager.operator custom resource to configure pod scheduling overrides for the desired components by running the following command. 
Use the overrideScheduling field under the controllerConfig , webhookConfig , or cainjectorConfig sections to define nodeSelector and tolerations settings. USD oc patch certmanager.operator cluster --type=merge -p=" spec: controllerConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 1 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule 2 webhookConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 3 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule 4 cainjectorConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 5 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule" 6 1 Defines the nodeSelector for the cert-manager controller deployment. 2 Defines the tolerations for the cert-manager controller deployment. 3 Defines the nodeSelector for the cert-manager webhook deployment. 4 Defines the tolerations for the cert-manager webhook deployment. 5 Defines the nodeSelector for the cert-manager cainjector deployment. 6 Defines the tolerations for the cert-manager cainjector deployment. Verification Verify pod scheduling settings for cert-manager pods: Check the deployments in the cert-manager namespace to confirm they have the correct nodeSelector and tolerations by running the following command: USD oc get pods -n cert-manager -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cert-manager-58d9c69db4-78mzp 1/1 Running 0 10m 10.129.0.36 ip-10-0-1-106.ec2.internal <none> <none> cert-manager-cainjector-85b6987c66-rhzf7 1/1 Running 0 11m 10.128.0.39 ip-10-0-1-136.ec2.internal <none> <none> cert-manager-webhook-7f54b4b858-29bsp 1/1 Running 0 11m 10.129.0.35 ip-10-0-1-106.ec2.internal <none> <none> Check the nodeSelector and tolerations settings applied to deployments by running the following command: USD oc get deployments -n cert-manager -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{.spec.template.spec.nodeSelector}{"\n"}{.spec.template.spec.tolerations}{"\n\n"}{end}' Example output cert-manager {"kubernetes.io/os":"linux","node-role.kubernetes.io/control-plane":""} [{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","operator":"Exists"}] cert-manager-cainjector {"kubernetes.io/os":"linux","node-role.kubernetes.io/control-plane":""} [{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","operator":"Exists"}] cert-manager-webhook {"kubernetes.io/os":"linux","node-role.kubernetes.io/control-plane":""} [{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","operator":"Exists"}] Verify pod scheduling events in the cert-manager namespace by running the following command: USD oc get events -n cert-manager --field-selector reason=Scheduled 9.6. Authenticating the cert-manager Operator for Red Hat OpenShift You can authenticate the cert-manager Operator for Red Hat OpenShift on the cluster by configuring the cloud credentials. 9.6.1. Authenticating on AWS Prerequisites You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. You have configured the Cloud Credential Operator to operate in mint or passthrough mode. 
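You can verify the Cloud Credential Operator mode before you begin. This check is a sketch and assumes the default cloudcredential resource name of cluster:

$ oc get cloudcredential cluster -o jsonpath='{.spec.credentialsMode}{"\n"}'

An empty value means the operator chooses the default mode for the platform; a value of Mint or Passthrough also satisfies this prerequisite. A value of Manual indicates that the mint and passthrough flows do not apply; for clusters that use the AWS Security Token Service, follow the procedure in the next section instead.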
Procedure Create a CredentialsRequest resource YAML file, for example, sample-credential-request.yaml , as follows: apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "route53:GetChange" effect: Allow resource: "arn:aws:route53:::change/*" - action: - "route53:ChangeResourceRecordSets" - "route53:ListResourceRecordSets" effect: Allow resource: "arn:aws:route53:::hostedzone/*" - action: - "route53:ListHostedZonesByName" effect: Allow resource: "*" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager Create a CredentialsRequest resource by running the following command: USD oc create -f sample-credential-request.yaml Update the subscription object for cert-manager Operator for Red Hat OpenShift by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{"spec":{"config":{"env":[{"name":"CLOUD_CREDENTIALS_SECRET_NAME","value":"aws-creds"}]}}}' Verification Get the name of the redeployed cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s Verify that the cert-manager controller pod is updated with AWS credential volumes that are mounted under the path specified in mountPath by running the following command: USD oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml Example output ... spec: containers: - args: ... - mountPath: /.aws name: cloud-credentials ... volumes: ... - name: cloud-credentials secret: ... secretName: aws-creds 9.6.2. Authenticating with AWS Security Token Service Prerequisites You have extracted and prepared the ccoctl binary. You have configured an OpenShift Container Platform cluster with AWS STS by using the Cloud Credential Operator in manual mode. 
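If you are not sure whether the cluster was configured for the AWS Security Token Service, you can inspect the service account issuer. This is a sketch that assumes the default authentication resource name of cluster:

$ oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}{"\n"}'

A non-empty issuer URL is typical of clusters that use short-term credentials with STS; an empty value suggests that the cluster does not use STS and that the procedure in the previous section applies instead.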
Procedure Create a directory to store a CredentialsRequest resource YAML file by running the following command: USD mkdir credentials-request Create a CredentialsRequest resource YAML file under the credentials-request directory, such as, sample-credential-request.yaml , by applying the following yaml: apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "route53:GetChange" effect: Allow resource: "arn:aws:route53:::change/*" - action: - "route53:ChangeResourceRecordSets" - "route53:ListResourceRecordSets" effect: Allow resource: "arn:aws:route53:::hostedzone/*" - action: - "route53:ListHostedZonesByName" effect: Allow resource: "*" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager Use the ccoctl tool to process CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name <user_defined_name> --region=<aws_region> \ --credentials-requests-dir=<path_to_credrequests_dir> \ --identity-provider-arn <oidc_provider_arn> --output-dir=<path_to_output_dir> Example output 2023/05/15 18:10:34 Role arn:aws:iam::XXXXXXXXXXXX:role/<user_defined_name>-cert-manager-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: <path_to_output_dir>/manifests/cert-manager-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role <user_defined_name>-cert-manager-aws-creds Copy the <aws_role_arn> from the output to use in the step. For example, "arn:aws:iam::XXXXXXXXXXXX:role/<user_defined_name>-cert-manager-aws-creds" Add the eks.amazonaws.com/role-arn="<aws_role_arn>" annotation to the service account by running the following command: USD oc -n cert-manager annotate serviceaccount cert-manager eks.amazonaws.com/role-arn="<aws_role_arn>" To create a new pod, delete the existing cert-manager controller pod by running the following command: USD oc delete pods -l app.kubernetes.io/name=cert-manager -n cert-manager The AWS credentials are applied to a new cert-manager controller pod within a minute. Verification Get the name of the updated cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s Verify that AWS credentials are updated by running the following command: USD oc set env -n cert-manager po/<cert_manager_controller_pod_name> --list Example output # pods/cert-manager-57f9555c54-vbcpg, container cert-manager-controller # POD_NAMESPACE from field path metadata.namespace AWS_ROLE_ARN=XXXXXXXXXXXX AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token Additional resources Configuring the Cloud Credential Operator utility 9.6.3. Authenticating on GCP Prerequisites You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. You have configured the Cloud Credential Operator to operate in mint or passthrough mode. 
Procedure Create a CredentialsRequest resource YAML file, such as, sample-credential-request.yaml by applying the following yaml: apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager Note The dns.admin role provides admin privileges to the service account for managing Google Cloud DNS resources. To ensure that the cert-manager runs with the service account that has the least privilege, you can create a custom role with the following permissions: dns.resourceRecordSets.* dns.changes.* dns.managedZones.list Create a CredentialsRequest resource by running the following command: USD oc create -f sample-credential-request.yaml Update the subscription object for cert-manager Operator for Red Hat OpenShift by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{"spec":{"config":{"env":[{"name":"CLOUD_CREDENTIALS_SECRET_NAME","value":"gcp-credentials"}]}}}' Verification Get the name of the redeployed cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s Verify that the cert-manager controller pod is updated with GCP credential volumes that are mounted under the path specified in mountPath by running the following command: USD oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml Example output spec: containers: - args: ... volumeMounts: ... - mountPath: /.config/gcloud name: cloud-credentials .... volumes: ... - name: cloud-credentials secret: ... items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials 9.6.4. Authenticating with GCP Workload Identity Prerequisites You extracted and prepared the ccoctl binary. You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. You have configured an OpenShift Container Platform cluster with GCP Workload Identity by using the Cloud Credential Operator in a manual mode. Procedure Create a directory to store a CredentialsRequest resource YAML file by running the following command: USD mkdir credentials-request In the credentials-request directory, create a YAML file that contains the following CredentialsRequest manifest: apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager Note The dns.admin role provides admin privileges to the service account for managing Google Cloud DNS resources. 
To ensure that the cert-manager runs with the service account that has the least privilege, you can create a custom role with the following permissions: dns.resourceRecordSets.* dns.changes.* dns.managedZones.list Use the ccoctl tool to process CredentialsRequest objects by running the following command: USD ccoctl gcp create-service-accounts \ --name <user_defined_name> --output-dir=<path_to_output_dir> \ --credentials-requests-dir=<path_to_credrequests_dir> \ --workload-identity-pool <workload_identity_pool> \ --workload-identity-provider <workload_identity_provider> \ --project <gcp_project_id> Example command USD ccoctl gcp create-service-accounts \ --name abcde-20230525-4bac2781 --output-dir=/home/outputdir \ --credentials-requests-dir=/home/credentials-requests \ --workload-identity-pool abcde-20230525-4bac2781 \ --workload-identity-provider abcde-20230525-4bac2781 \ --project openshift-gcp-devel Apply the secrets generated in the manifests directory of your cluster by running the following command: USD ls <path_to_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {} Update the subscription object for cert-manager Operator for Red Hat OpenShift by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{"spec":{"config":{"env":[{"name":"CLOUD_CREDENTIALS_SECRET_NAME","value":"gcp-credentials"}]}}}' Verification Get the name of the redeployed cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s Verify that the cert-manager controller pod is updated with GCP workload identity credential volumes that are mounted under the path specified in mountPath by running the following command: USD oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml Example output spec: containers: - args: ... volumeMounts: - mountPath: /var/run/secrets/openshift/serviceaccount name: bound-sa-token ... - mountPath: /.config/gcloud name: cloud-credentials ... volumes: - name: bound-sa-token projected: ... sources: - serviceAccountToken: audience: openshift ... path: token - name: cloud-credentials secret: ... items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials Additional resources Configuring the Cloud Credential Operator utility Manual mode with short-term credentials for components Default behavior of the Cloud Credential Operator 9.7. Configuring an ACME issuer The cert-manager Operator for Red Hat OpenShift supports using Automated Certificate Management Environment (ACME) CA servers, such as Let's Encrypt , to issue certificates. Explicit credentials are configured by specifying the secret details in the Issuer API object. Ambient credentials are extracted from the environment, metadata services, or local files which are not explicitly configured in the Issuer API object. Note The Issuer object is namespace scoped. It can only issue certificates from the same namespace. You can also use the ClusterIssuer object to issue certificates across all namespaces in the cluster. Example YAML file that defines the ClusterIssuer object apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: acme-cluster-issuer spec: acme: ... Note By default, you can use the ClusterIssuer object with ambient credentials. 
To use the Issuer object with ambient credentials, you must enable the --issuer-ambient-credentials setting for the cert-manager controller. 9.7.1. About ACME issuers The ACME issuer type for the cert-manager Operator for Red Hat OpenShift represents an Automated Certificate Management Environment (ACME) certificate authority (CA) server. ACME CA servers rely on a challenge to verify that a client owns the domain names that the certificate is being requested for. If the challenge is successful, the cert-manager Operator for Red Hat OpenShift can issue the certificate. If the challenge fails, the cert-manager Operator for Red Hat OpenShift does not issue the certificate. Note Private DNS zones are not supported with Let's Encrypt and internet ACME servers. 9.7.1.1. Supported ACME challenges types The cert-manager Operator for Red Hat OpenShift supports the following challenge types for ACME issuers: HTTP-01 With the HTTP-01 challenge type, you provide a computed key at an HTTP URL endpoint in your domain. If the ACME CA server can get the key from the URL, it can validate you as the owner of the domain. For more information, see HTTP01 in the upstream cert-manager documentation. Note HTTP-01 requires that the Let's Encrypt servers can access the route of the cluster. If an internal or private cluster is behind a proxy, the HTTP-01 validations for certificate issuance fail. The HTTP-01 challenge is restricted to port 80. For more information, see HTTP-01 challenge (Let's Encrypt). DNS-01 With the DNS-01 challenge type, you provide a computed key at a DNS TXT record. If the ACME CA server can get the key by DNS lookup, it can validate you as the owner of the domain. For more information, see DNS01 in the upstream cert-manager documentation. 9.7.1.2. Supported DNS-01 providers The cert-manager Operator for Red Hat OpenShift supports the following DNS-01 providers for ACME issuers: Amazon Route 53 Azure DNS Note The cert-manager Operator for Red Hat OpenShift does not support using Microsoft Entra ID pod identities to assign a managed identity to a pod. Google Cloud DNS Webhook Red Hat tests and supports DNS providers using an external webhook with cert-manager on OpenShift Container Platform. The following DNS providers are tested and supported with OpenShift Container Platform: cert-manager-webhook-ibmcis Note Using a DNS provider that is not listed might work with OpenShift Container Platform, but the provider was not tested by Red Hat and therefore is not supported by Red Hat. 9.7.2. Configuring an ACME issuer to solve HTTP-01 challenges You can use cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve HTTP-01 challenges. This procedure uses Let's Encrypt as the ACME CA server. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a service that you want to expose. In this procedure, the service is named sample-workload . Procedure Create an ACME cluster issuer. Create a YAML file that defines the ClusterIssuer object: Example acme-cluster-issuer.yaml file apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-staging 1 spec: acme: preferredChain: "" privateKeySecretRef: name: <secret_for_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - http01: ingress: ingressClassName: openshift-default 4 1 Provide a name for the cluster issuer. 2 Replace <secret_private_key> with the name of secret to store the ACME account private key in. 
3 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 4 Specify the Ingress class. Optional: If you create the object without specifying ingressClassName , use the following command to patch the existing ingress: USD oc patch ingress/<ingress-name> --type=merge --patch '{"spec":{"ingressClassName":"openshift-default"}}' -n <namespace> Create the ClusterIssuer object by running the following command: USD oc create -f acme-cluster-issuer.yaml Create an Ingress to expose the service of the user workload. Create a YAML file that defines a Namespace object: Example namespace.yaml file apiVersion: v1 kind: Namespace metadata: name: my-ingress-namespace 1 1 Specify the namespace for the Ingress. Create the Namespace object by running the following command: USD oc create -f namespace.yaml Create a YAML file that defines the Ingress object: Example ingress.yaml file apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: sample-ingress 1 namespace: my-ingress-namespace 2 annotations: cert-manager.io/cluster-issuer: letsencrypt-staging 3 spec: ingressClassName: openshift-default 4 tls: - hosts: - <hostname> 5 secretName: sample-tls 6 rules: - host: <hostname> 7 http: paths: - path: / pathType: Prefix backend: service: name: sample-workload 8 port: number: 80 1 Specify the name of the Ingress. 2 Specify the namespace that you created for the Ingress. 3 Specify the cluster issuer that you created. 4 Specify the Ingress class. 5 Replace <hostname> with the Subject Alternative Name (SAN) to be associated with the certificate. This name is used to add DNS names to the certificate. 6 Specify the secret that stores the certificate. 7 Replace <hostname> with the hostname. You can use the <host_name>.<cluster_ingress_domain> syntax to take advantage of the *.<cluster_ingress_domain> wildcard DNS record and serving certificate for the cluster. For example, you might use apps.<cluster_base_domain> . Otherwise, you must ensure that a DNS record exists for the chosen hostname. 8 Specify the name of the service to expose. This example uses a service named sample-workload . Create the Ingress object by running the following command: USD oc create -f ingress.yaml 9.7.3. Configuring an ACME issuer by using explicit credentials for AWS Route53 You can use cert-manager Operator for Red Hat OpenShift to set up an Automated Certificate Management Environment (ACME) issuer to solve DNS-01 challenges by using explicit credentials on AWS. This procedure uses Let's Encrypt as the ACME certificate authority (CA) server and shows how to solve DNS-01 challenges with Amazon Route 53. Prerequisites You must provide the explicit accessKeyID and secretAccessKey credentials. For more information, see Route53 in the upstream cert-manager documentation. Note You can use Amazon Route 53 with explicit credentials in an OpenShift Container Platform cluster that is not running on AWS. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... 
controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. Optional: Create a namespace for the issuer: USD oc new-project <issuer_namespace> Create a secret to store your AWS credentials in by running the following command: USD oc create secret generic aws-secret --from-literal=awsSecretAccessKey=<aws_secret_access_key> \ 1 -n my-issuer-namespace 1 Replace <aws_secret_access_key> with your AWS secret access key. Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: "<email_address>" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: accessKeyID: <aws_key_id> 6 hostedZoneID: <hosted_zone_id> 7 region: <region_name> 8 secretAccessKeySecretRef: name: "aws-secret" 9 key: "awsSecretAccessKey" 10 1 Provide a name for the issuer. 2 Specify the namespace that you created for the issuer. 3 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 4 Replace <email_address> with your email address. 5 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 6 Replace <aws_key_id> with your AWS key ID. 7 Replace <hosted_zone_id> with your hosted zone ID. 8 Replace <region_name> with the AWS region name. For example, us-east-1 . 9 Specify the name of the secret you created. 10 Specify the key in the secret you created that stores your AWS secret access key. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 9.7.4. Configuring an ACME issuer by using ambient credentials on AWS You can use cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve DNS-01 challenges by using ambient credentials on AWS. This procedure uses Let's Encrypt as the ACME CA server and shows how to solve DNS-01 challenges with Amazon Route 53. Prerequisites If your cluster is configured to use the AWS Security Token Service (STS), you followed the instructions from the Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift for the AWS Security Token Service cluster section. If your cluster does not use the AWS STS, you followed the instructions from the Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift on AWS section. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... 
controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. Optional: Create a namespace for the issuer: USD oc new-project <issuer_namespace> Modify the CertManager resource to add the --issuer-ambient-credentials argument: USD oc patch certmanager/cluster \ --type=merge \ -p='{"spec":{"controllerConfig":{"overrideArgs":["--issuer-ambient-credentials"]}}}' Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: "<email_address>" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: hostedZoneID: <hosted_zone_id> 6 region: us-east-1 1 Provide a name for the issuer. 2 Specify the namespace that you created for the issuer. 3 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 4 Replace <email_address> with your email address. 5 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 6 Replace <hosted_zone_id> with your hosted zone ID. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 9.7.5. Configuring an ACME issuer by using explicit credentials for GCP Cloud DNS You can use the cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve DNS-01 challenges by using explicit credentials on GCP. This procedure uses Let's Encrypt as the ACME CA server and shows how to solve DNS-01 challenges with Google CloudDNS. Prerequisites You have set up Google Cloud service account with a desired role for Google CloudDNS. For more information, see Google CloudDNS in the upstream cert-manager documentation. Note You can use Google CloudDNS with explicit credentials in an OpenShift Container Platform cluster that is not running on GCP. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. 
Optional: Create a namespace for the issuer: USD oc new-project my-issuer-namespace Create a secret to store your GCP credentials by running the following command: USD oc create secret generic clouddns-dns01-solver-svc-acct --from-file=service_account.json=<path/to/gcp_service_account.json> -n my-issuer-namespace Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: "" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: cloudDNS: project: <project_id> 5 serviceAccountSecretRef: name: clouddns-dns01-solver-svc-acct 6 key: service_account.json 7 1 Provide a name for the issuer. 2 Replace <issuer_namespace> with your issuer namespace. 3 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 4 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 5 Replace <project_id> with the name of the GCP project that contains the Cloud DNS zone. 6 Specify the name of the secret you created. 7 Specify the key in the secret you created that stores your GCP secret access key. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 9.7.6. Configuring an ACME issuer by using ambient credentials on GCP You can use the cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve DNS-01 challenges by using ambient credentials on GCP. This procedure uses Let's Encrypt as the ACME CA server and shows how to solve DNS-01 challenges with Google CloudDNS. Prerequisites If your cluster is configured to use GCP Workload Identity, you followed the instructions from the Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift with GCP Workload Identity section. If your cluster does not use GCP Workload Identity, you followed the instructions from the Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift on GCP section. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. 
Optional: Create a namespace for the issuer: USD oc new-project <issuer_namespace> Modify the CertManager resource to add the --issuer-ambient-credentials argument: USD oc patch certmanager/cluster \ --type=merge \ -p='{"spec":{"controllerConfig":{"overrideArgs":["--issuer-ambient-credentials"]}}}' Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> spec: acme: preferredChain: "" privateKeySecretRef: name: <secret_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - dns01: cloudDNS: project: <gcp_project_id> 4 1 Provide a name for the issuer. 2 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 3 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 4 Replace <gcp_project_id> with the name of the GCP project that contains the Cloud DNS zone. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 9.7.7. Configuring an ACME issuer by using explicit credentials for Microsoft Azure DNS You can use cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve DNS-01 challenges by using explicit credentials on Microsoft Azure. This procedure uses Let's Encrypt as the ACME CA server and shows how to solve DNS-01 challenges with Azure DNS. Prerequisites You have set up a service principal with desired role for Azure DNS. For more information, see Azure DNS in the upstream cert-manager documentation. Note You can follow this procedure for an OpenShift Container Platform cluster that is not running on Microsoft Azure. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. Optional: Create a namespace for the issuer: USD oc new-project my-issuer-namespace Create a secret to store your Azure credentials in by running the following command: USD oc create secret generic <secret_name> --from-literal=<azure_secret_access_key_name>=<azure_secret_access_key_value> \ 1 2 3 -n my-issuer-namespace 1 Replace <secret_name> with your secret name. 2 Replace <azure_secret_access_key_name> with your Azure secret access key name. 3 Replace <azure_secret_access_key_value> with your Azure secret key. 
Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme-dns01-azuredns-issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: "" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: azureDNS: clientID: <azure_client_id> 5 clientSecretSecretRef: name: <secret_name> 6 key: <azure_secret_access_key_name> 7 subscriptionID: <azure_subscription_id> 8 tenantID: <azure_tenant_id> 9 resourceGroupName: <azure_dns_zone_resource_group> 10 hostedZoneName: <azure_dns_zone> 11 environment: AzurePublicCloud 1 Provide a name for the issuer. 2 Replace <issuer_namespace> with your issuer namespace. 3 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 4 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 5 Replace <azure_client_id> with your Azure client ID. 6 Replace <secret_name> with a name of the client secret. 7 Replace <azure_secret_access_key_name> with the client secret key name. 8 Replace <azure_subscription_id> with your Azure subscription ID. 9 Replace <azure_tenant_id> with your Azure tenant ID. 10 Replace <azure_dns_zone_resource_group> with the name of the Azure DNS zone resource group. 11 Replace <azure_dns_zone> with the name of Azure DNS zone. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 9.7.8. Additional resources Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift for the AWS Security Token Service cluster Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift on AWS Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift with GCP Workload Identity Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift on GCP 9.8. Configuring certificates with an issuer By using the cert-manager Operator for Red Hat OpenShift, you can manage certificates, handling tasks such as renewal and issuance, for workloads within the cluster, as well as components interacting externally to the cluster. 9.8.1. Creating certificates for user workloads Prerequisites You have access to the cluster with cluster-admin privileges. You have installed the cert-manager Operator for Red Hat OpenShift. Procedure Create an issuer. For more information, see "Configuring an issuer" in the "Additional resources" section. Create a certificate: Create a YAML file, for example, certificate.yaml , that defines the Certificate object: Example certificate.yaml file apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: <issuer_namespace> 2 spec: isCA: false commonName: '<common_name>' 3 secretName: <secret_name> 4 dnsNames: - "<domain_name>" 5 issuerRef: name: <issuer_name> 6 kind: Issuer 1 Provide a name for the certificate. 2 Specify the namespace of the issuer. 3 Specify the common name (CN). 4 Specify the name of the secret to create that contains the certificate. 5 Specify the domain name. 6 Specify the name of the issuer. 
Create the Certificate object by running the following command: USD oc create -f certificate.yaml Verification Verify that the certificate is created and ready to use by running the following command: USD oc get certificate -w -n <issuer_namespace> Once certificate is in Ready status, workloads on your cluster can start using the generated certificate secret. 9.8.2. Creating certificates for the API server Prerequisites You have access to the cluster with cluster-admin privileges. You have installed version 1.13.0 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Create an issuer. For more information, see "Configuring an issuer" in the "Additional resources" section. Create a certificate: Create a YAML file, for example, certificate.yaml , that defines the Certificate object: Example certificate.yaml file apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-config spec: isCA: false commonName: "api.<cluster_base_domain>" 2 secretName: <secret_name> 3 dnsNames: - "api.<cluster_base_domain>" 4 issuerRef: name: <issuer_name> 5 kind: Issuer 1 Provide a name for the certificate. 2 Specify the common name (CN). 3 Specify the name of the secret to create that contains the certificate. 4 Specify the DNS name of the API server. 5 Specify the name of the issuer. Create the Certificate object by running the following command: USD oc create -f certificate.yaml Add the API server named certificate. For more information, see "Adding an API server named certificate" section in the "Additional resources" section. Note To ensure the certificates are updated, run the oc login command again after the certificate is created. Verification Verify that the certificate is created and ready to use by running the following command: USD oc get certificate -w -n openshift-config Once certificate is in Ready status, API server on your cluster can start using the generated certificate secret. 9.8.3. Creating certificates for the Ingress Controller Prerequisites You have access to the cluster with cluster-admin privileges. You have installed version 1.13.0 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Create an issuer. For more information, see "Configuring an issuer" in the "Additional resources" section. Create a certificate: Create a YAML file, for example, certificate.yaml , that defines the Certificate object: Example certificate.yaml file apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-ingress spec: isCA: false commonName: "apps.<cluster_base_domain>" 2 secretName: <secret_name> 3 dnsNames: - "apps.<cluster_base_domain>" 4 - "*.apps.<cluster_base_domain>" 5 issuerRef: name: <issuer_name> 6 kind: Issuer 1 Provide a name for the certificate. 2 Specify the common name (CN). 3 Specify the name of the secret to create that contains the certificate. 4 5 Specify the DNS name of the ingress. 6 Specify the name of the issuer. Create the Certificate object by running the following command: USD oc create -f certificate.yaml Replace the default ingress certificate. For more information, see "Replacing the default ingress certificate" section in the "Additional resources" section. Verification Verify that the certificate is created and ready to use by running the following command: USD oc get certificate -w -n openshift-ingress Once certificate is in Ready status, Ingress Controller on your cluster can start using the generated certificate secret. 9.8.4. 
Additional resources Configuring an issuer Supported issuer types Configuring an ACME issuer Adding an API server named certificate Replacing the default ingress certificate 9.9. Securing routes with the cert-manager Operator for Red Hat OpenShift In the OpenShift Container Platform, the route API is extended to provide a configurable option to reference TLS certificates via secrets. With the Creating a route with externally managed certificate Technology Preview feature enabled, you can minimize errors from manual intervention, streamline the certificate management process, and enable the OpenShift Container Platform router to promptly serve the referenced certificate. Important Securing routes with the cert-manager Operator for Red Hat OpenShift is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 9.9.1. Configuring certificates to secure routes in your cluster The following steps demonstrate how to use the cert-manager Operator for Red Hat OpenShift with the Let's Encrypt ACME HTTP-01 challenge type to secure the route resources in your OpenShift Container Platform cluster. Prerequisites You have installed version 1.14.0 or later of the cert-manager Operator for Red Hat OpenShift. You have enabled the RouteExternalCertificate feature gate. You have the create and update permissions on the routes/custom-host sub-resource. You have a Service resource that you want to expose. Procedure Create a Route resource for your Service resource using edge TLS termination and a custom hostname by running the following command. The hostname will be used while creating a Certificate resource in the following steps. USD oc create route edge <route_name> \ 1 --service=<service_name> \ 2 --hostname=<hostname> \ 3 --namespace=<namespace> 4 1 Specify your route's name. 2 Specify the service you want to expose. 3 Specify the hostname of your route. 4 Specify the namespace where your route is located. Create an Issuer to configure the HTTP-01 solver by running the following command. For other ACME issuer types, see "Configuring an ACME issuer". Example Issuer.yaml file USD oc create -f - << EOF apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: letsencrypt-acme namespace: <namespace> 1 spec: acme: server: https://acme-v02.api.letsencrypt.org/directory privateKeySecretRef: name: letsencrypt-acme-account-key solvers: - http01: ingress: ingressClassName: openshift-default EOF 1 Specify the namespace where the Issuer is located. It should be the same as your route's namespace. Create a Certificate object for the route by running the following command. The secretName specifies the TLS secret that is going to be issued and managed by cert-manager and will also be referenced in your route in the following steps. 
USD oc create -f - << EOF apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: example-route-cert namespace: <namespace> 1 spec: commonName: <hostname> 2 dnsNames: - <hostname> 3 usages: - server auth issuerRef: kind: Issuer name: letsencrypt-acme secretName: <secret_name> 4 EOF 1 Specify the namespace where the Certificate resource is located. It should be the same as your route's namespace. 2 Specify the certificate's common name using the hostname of the route. 3 Add the hostname of your route to the certificate's DNS names. 4 Specify the name of the secret that contains the certificate. Create a Role to provide the router service account permissions to read the referenced secret by using the following command: USD oc create role secret-reader \ --verb=get,list,watch \ --resource=secrets \ --resource-name=<secret_name> \ 1 --namespace=<namespace> 2 1 Specify the name of the secret that you want to grant access to. It should be consistent with your secretName specified in the Certificate resource. 2 Specify the namespace where both your secret and route are located. Create a RoleBinding resource to bind the router service account with the newly created Role resource by using the following command: USD oc create rolebinding secret-reader-binding \ --role=secret-reader \ --serviceaccount=openshift-ingress:router \ --namespace=<namespace> 1 1 Specify the namespace where both your secret and route are located. Update your route's .spec.tls.externalCertificate field to reference the previously created secret and use the certificate issued by cert-manager by using the following command: USD oc patch route <route_name> \ 1 -n <namespace> \ 2 --type=merge \ -p '{"spec":{"tls":{"externalCertificate":{"name":"<secret_name>"}}}}' 3 1 Specify the route name. 2 Specify the namespace where both your secret and route are located. 3 Specify the name of the secret that contains the certificate. Verification Verify that the certificate is created and ready to use by running the following command: USD oc get certificate -n <namespace> 1 USD oc get secret -n <namespace> 2 1 2 Specify the namespace where both your secret and route reside. Verify that the router is using the referenced external certificate by running the following command. The command should return with the status code 200 OK . USD curl -IsS https://<hostname> 1 1 Specify the hostname of your route. Verify the server certificate's subject , subjectAltName and issuer are all as expected from the curl verbose outputs by running the following command: USD curl -v https://<hostname> 1 1 Specify the hostname of your route. The route is now successfully secured by the certificate from the referenced secret issued by cert-manager. cert-manager will automatically manage the certificate's lifecycle. 9.9.2. Additional resources Creating a route with externally managed certificate Configuring an ACME issuer 9.10. Integrating the cert-manager Operator for Red Hat OpenShift with Istio-CSR Important Istio-CSR integration for cert-manager Operator for Red Hat OpenShift is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The cert-manager Operator for Red Hat OpenShift provides enhanced support for securing workloads and control plane components in Red Hat OpenShift Service Mesh or Istio. This includes support for certificates enabling mutual TLS (mTLS), which are signed, delivered, and renewed using cert-manager issuers. You can secure Istio workloads and control plane components by using the cert-manager Operator for Red Hat OpenShift managed Istio-CSR agent. With this Istio-CSR integration, Istio can now obtain certificates from the cert-manager Operator for Red Hat OpenShift, simplifying security and certificate management. 9.10.1. Installing the Istio-CSR agent through cert-manager Operator for Red Hat OpenShift 9.10.1.1. Enabling the Istio-CSR feature Use this procedure to enable the Istio-CSR feature in the cert-manager Operator for Red Hat OpenShift. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Patch the Subscription object for the cert-manager Operator for Red Hat OpenShift to enable the Istio-CSR feature by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{"spec":{"config":{"env":[{"name":"UNSUPPORTED_ADDON_FEATURES","value":"IstioCSR=true"}]}}}' Verification Verify that the deployments have finished rolling out by running the following command: USD oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator Example output deployment "cert-manager-operator-controller-manager" successfully rolled out 9.10.1.2. Creating a root CA issuer for the Istio-CSR agent Use this procedure to create the root CA issuer for the Istio-CSR agent. Note Other supported issuers can be used, except for the ACME issuer, which is not supported. For more information, see "cert-manager Operator for Red Hat OpenShift issuer providers". Create a YAML file, for example, issuer.yaml , that defines the Issuer and Certificate objects: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer 1 metadata: name: selfsigned namespace: <istio_project_name> 2 spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: <istio_project_name> spec: isCA: true duration: 87600h # 10 years secretName: istio-ca commonName: istio-ca privateKey: algorithm: ECDSA size: 256 subject: organizations: - cluster.local - cert-manager issuerRef: name: selfsigned kind: Issuer 3 group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: istio-ca namespace: <istio_project_name> 4 spec: ca: secretName: istio-ca 1 3 Specify the Issuer or ClusterIssuer . 2 4 Specify the name of the Istio project. Verification Verify that the Issuer is created and ready to use by running the following command: USD oc get issuer istio-ca -n <istio_project_name> Example output NAME READY AGE istio-ca True 3m Additional resources cert-manager Operator for Red Hat OpenShift issuer providers 9.10.1.3. Creating the IstioCSR custom resource Use this procedure to install the Istio-CSR agent through the cert-manager Operator for Red Hat OpenShift. Prerequisites You have access to the cluster with cluster-admin privileges. You have enabled the Istio-CSR feature. You have created the Issuer or ClusterIssuer resources required for generating certificates for the Istio-CSR agent. 
Note If you are using Issuer resource, create the Issuer and Certificate resources in the Red Hat OpenShift Service Mesh or Istiod namespace. Certificate requests are generated in the same namespace, and role-based access control (RBAC) is configured accordingly. Procedure Create a new project for installing Istio-CSR by running the following command. You can use an existing project and skip this step. USD oc new-project <istio_csr_project_name> Create the IstioCSR custom resource to enable Istio-CSR agent managed by the cert-manager Operator for Red Hat OpenShift for processing Istio workload and control plane certificate signing requests. Note Only one IstioCSR custom resource (CR) is supported at a time. If multiple IstioCSR CRs are created, only one will be active. Use the status sub-resource of IstioCSR to check if a resource is unprocessed. If multiple IstioCSR CRs are created simultaneously, none will be processed. If multiple IstioCSR CRs are created sequentially, only the first one will be processed. To prevent new requests from being rejected, delete any unprocessed IstioCSR CRs. The Operator does not automatically remove objects created for IstioCSR . If an active IstioCSR resource is deleted and a new one is created in a different namespace without removing the deployments, multiple istio-csr deployments may remain active. This behavior is not recommended and is not supported. Create a YAML file, for example, istiocsr.yaml , that defines the IstioCSR object: Example IstioCSR.yaml file apiVersion: operator.openshift.io/v1alpha1 kind: IstioCSR metadata: name: default namespace: <istio_csr_project_name> spec: IstioCSRConfig: certManager: issuerRef: name: istio-ca 1 kind: Issuer 2 group: cert-manager.io istiodTLSConfig: trustDomain: cluster.local istio: namespace: istio-system 1 Specify the Issuer or ClusterIssuer name. It should be the same name as the CA issuer defined in the issuer.yaml file. 2 Specify the Issuer or ClusterIssuer kind. It should be the same kind as the CA issuer defined in the issuer.yaml file. Create the IstioCSR custom resource by running the following command: USD oc create -f IstioCSR.yaml Verification Verify that the Istio-CSR deployment is ready by running the following command: USD oc get deployment -n <istio_csr_project_name> Example output NAME READY UP-TO-DATE AVAILABLE AGE cert-manager-istio-csr 1/1 1 1 24s Verify that the Istio-CSR pods are running by running the following command: USD oc get pod -n <istio_csr_project_name> Example output NAME READY STATUS RESTARTS AGE cert-manager-istio-csr-5c979f9b7c-bv57w 1/1 Running 0 45s Verify that the Istio-CSR pod is not reporting any errors in the logs by running the following command: USD oc -n <istio_csr_project_name> logs <istio_csr_pod_name> Verify that the cert-manager Operator for Red Hat OpenShift pod is not reporting any errors by running the following command: USD oc -n cert-manager-operator logs <cert_manager_operator_pod_name> 9.10.2. Uninstalling the Istio-CSR agent managed by cert-manager Operator for Red Hat OpenShift Use this procedure to uninstall the Istio-CSR agent managed by cert-manager Operator for Red Hat OpenShift. Prerequisites You have access to the cluster with cluster-admin privileges. You have enabled the Istio-CSR feature. You have created the IstioCSR custom resource. 
Procedure Remove the IstioCSR custom resource by running the following command: USD oc -n <istio-csr_project_name> delete istiocsrs.operator.openshift.io default Remove related resources: Important To avoid disrupting any Red Hat OpenShift Service Mesh or Istio components, ensure that no component is referencing the Istio-CSR service or the certificates issued for Istio before removing the following resources. List the cluster-scoped resources by running the following command and save the names of the listed resources for later reference: USD oc get clusterrolebindings,clusterroles -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr" List the resources in the namespace where the Istio-CSR agent is deployed by running the following command and save the names of the listed resources for later reference: USD oc get certificate,deployments,services,serviceaccounts -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr" -n <istio_csr_project_name> List the resources in the namespaces where Red Hat OpenShift Service Mesh or Istio is deployed by running the following command and save the names of the listed resources for later reference: USD oc get roles,rolebindings -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr" -n <istio_csr_project_name> For each resource listed in the preceding steps, delete the resource by running the following command: USD oc -n <istio_csr_project_name> delete <resource_type>/<resource_name> Repeat this process until all of the related resources have been deleted. 9.10.3. Upgrading the cert-manager Operator for Red Hat OpenShift with Istio-CSR feature enabled When the Istio-CSR TechPreview feature gate is enabled, the Operator cannot be upgraded. To use the available version, you must uninstall the cert-manager Operator for Red Hat OpenShift and remove all Istio-CSR resources before reinstalling it. 9.11. Monitoring cert-manager Operator for Red Hat OpenShift You can expose controller metrics for the cert-manager Operator for Red Hat OpenShift in the format provided by the Prometheus Operator. 9.11.1. Enabling monitoring by using a service monitor for the cert-manager Operator for Red Hat OpenShift You can enable monitoring and metrics collection for the cert-manager Operator for Red Hat OpenShift by using a service monitor to perform the custom metrics scraping. Prerequisites You have access to the cluster with cluster-admin privileges. The cert-manager Operator for Red Hat OpenShift is installed. 
Procedure Add the label to enable cluster monitoring by running the following command: USD oc label namespace cert-manager openshift.io/cluster-monitoring=true Create a service monitor: Create a YAML file that defines the Role , RoleBinding , and ServiceMonitor objects: Example monitoring.yaml file apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: prometheus-k8s namespace: cert-manager rules: - apiGroups: - "" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: prometheus-k8s namespace: cert-manager roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: cert-manager app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager name: cert-manager namespace: cert-manager spec: endpoints: - interval: 30s port: tcp-prometheus-servicemonitor scheme: http selector: matchLabels: app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager Create the Role , RoleBinding , and ServiceMonitor objects by running the following command: USD oc create -f monitoring.yaml Additional resources Setting up metrics collection for user-defined projects 9.11.2. Querying metrics for the cert-manager Operator for Red Hat OpenShift After you have enabled monitoring for the cert-manager Operator for Red Hat OpenShift, you can query its metrics by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the cert-manager Operator for Red Hat OpenShift. You have enabled monitoring and metrics collection for the cert-manager Operator for Red Hat OpenShift. Procedure From the OpenShift Container Platform web console, navigate to Observe Metrics . Add a query by using one of the following formats: Specify the endpoints: {instance="<endpoint>"} 1 1 Replace <endpoint> with the value of the endpoint for the cert-manager service. You can find the endpoint value by running the following command: oc describe service cert-manager -n cert-manager . Specify the tcp-prometheus-servicemonitor port: {endpoint="tcp-prometheus-servicemonitor"} 9.12. Configuring log levels for cert-manager and the cert-manager Operator for Red Hat OpenShift To troubleshoot issues with the cert-manager components and the cert-manager Operator for Red Hat OpenShift, you can configure the log level verbosity. Note To use different log levels for different cert-manager components, see Customizing cert-manager Operator API fields . 9.12.1. Setting a log level for cert-manager You can set a log level for cert-manager to determine the verbosity of log messages. Prerequisites You have access to the cluster with cluster-admin privileges. You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Edit the CertManager resource by running the following command: USD oc edit certmanager.operator cluster Set the log level value by editing the spec.logLevel section: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager ... spec: logLevel: <log_level> 1 1 The valid log level values for the CertManager resource are Normal , Debug , Trace , and TraceAll . 
To audit logs and perform common operations when there are no issues, set logLevel to Normal . To troubleshoot a minor issue by viewing verbose logs, set logLevel to Debug . To troubleshoot a major issue by viewing more verbose logs, you can set logLevel to Trace . To troubleshoot serious issues, set logLevel to TraceAll . The default logLevel is Normal . Note TraceAll generates a huge amount of logs. After setting logLevel to TraceAll , you might experience performance issues. Save your changes and quit the text editor to apply your changes. After applying the changes, the verbosity level for the cert-manager components (controller, CA injector, and webhook) is updated. 9.12.2. Setting a log level for the cert-manager Operator for Red Hat OpenShift You can set a log level for the cert-manager Operator for Red Hat OpenShift to determine the verbosity of the operator log messages. Prerequisites You have access to the cluster with cluster-admin privileges. You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Update the subscription object for the cert-manager Operator for Red Hat OpenShift to provide the verbosity level for the operator logs by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{"spec":{"config":{"env":[{"name":"OPERATOR_LOG_LEVEL","value":"v"}]}}}' 1 1 Replace v with the desired log level number. The valid values for v range from 1 to 10 . The default value is 2 . Verification The cert-manager Operator pod is redeployed. Verify that the log level of the cert-manager Operator for Red Hat OpenShift is updated by running the following command: USD oc set env deploy/cert-manager-operator-controller-manager -n cert-manager-operator --list | grep -e OPERATOR_LOG_LEVEL -e container Example output # deployments/cert-manager-operator-controller-manager, container kube-rbac-proxy OPERATOR_LOG_LEVEL=9 # deployments/cert-manager-operator-controller-manager, container cert-manager-operator OPERATOR_LOG_LEVEL=9 Verify that the log level of the cert-manager Operator for Red Hat OpenShift is updated by running the oc logs command: USD oc logs deploy/cert-manager-operator-controller-manager -n cert-manager-operator 9.12.3. Additional resources Customizing cert-manager Operator API fields 9.13. Uninstalling the cert-manager Operator for Red Hat OpenShift You can remove the cert-manager Operator for Red Hat OpenShift from OpenShift Container Platform by uninstalling the Operator and removing its related resources. 9.13.1. Uninstalling the cert-manager Operator for Red Hat OpenShift You can uninstall the cert-manager Operator for Red Hat OpenShift by using the web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. The cert-manager Operator for Red Hat OpenShift is installed. Procedure Log in to the OpenShift Container Platform web console. Uninstall the cert-manager Operator for Red Hat OpenShift. Navigate to Operators Installed Operators . Click the Options menu next to the cert-manager Operator for Red Hat OpenShift entry and click Uninstall Operator . In the confirmation dialog, click Uninstall . 9.13.2. Removing cert-manager Operator for Red Hat OpenShift resources After you have uninstalled the cert-manager Operator for Red Hat OpenShift, you can optionally remove its associated resources from your cluster. 
Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Remove the deployments of the cert-manager components, such as cert-manager , cainjector , and webhook , present in the cert-manager namespace. Click the Project drop-down menu to see a list of all available projects, and select the cert-manager project. Navigate to Workloads Deployments . Select the deployment that you want to delete. Click the Actions drop-down menu, and select Delete Deployment to see a confirmation dialog box. Click Delete to delete the deployment. Alternatively, delete deployments of the cert-manager components such as cert-manager , cainjector , and webhook , present in the cert-manager namespace by using the command-line interface (CLI). USD oc delete deployment -n cert-manager -l app.kubernetes.io/instance=cert-manager Optional: Remove the custom resource definitions (CRDs) that were installed by the cert-manager Operator for Red Hat OpenShift: Navigate to Administration CustomResourceDefinitions . Enter certmanager in the Name field to filter the CRDs. Click the Options menu next to each of the following CRDs, and select Delete Custom Resource Definition : Certificate CertificateRequest CertManager ( operator.openshift.io ) Challenge ClusterIssuer Issuer Order Optional: Remove the cert-manager-operator namespace. Navigate to Administration Namespaces . Click the Options menu next to the cert-manager-operator namespace and select Delete Namespace . In the confirmation dialog, enter cert-manager-operator in the field and click Delete . 
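Alternatively, you can remove the CRDs and the cert-manager-operator namespace by using the CLI. The following commands are a minimal sketch: the CRD names are assumed to be the default names installed by cert-manager and the cert-manager Operator for Red Hat OpenShift, so list the CRDs on your cluster first and adjust the names as needed before deleting anything.
USD oc get crd | grep -E 'cert-manager|certmanager'
USD oc delete crd certificates.cert-manager.io certificaterequests.cert-manager.io issuers.cert-manager.io clusterissuers.cert-manager.io challenges.acme.cert-manager.io orders.acme.cert-manager.io certmanagers.operator.openshift.io
USD oc delete namespace cert-manager-operator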
"oc get pods -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 3m39s cert-manager-cainjector-56cc5f9868-7g9z7 1/1 Running 0 4m5s cert-manager-webhook-d4f79d7f7-9dg9w 1/1 Running 0 4m9s",
"oc new-project cert-manager-operator",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: targetNamespaces: - \"cert-manager-operator\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: targetNamespaces: [] spec: {}",
"oc create -f operatorGroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: channel: stable-v1 name: openshift-cert-manager-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic",
"oc create -f subscription.yaml",
"oc get subscription -n cert-manager-operator",
"NAME PACKAGE SOURCE CHANNEL openshift-cert-manager-operator openshift-cert-manager-operator redhat-operators stable-v1",
"oc get csv -n cert-manager-operator",
"NAME DISPLAY VERSION REPLACES PHASE cert-manager-operator.v1.13.0 cert-manager Operator for Red Hat OpenShift 1.13.0 cert-manager-operator.v1.12.1 Succeeded",
"oc get pods -n cert-manager-operator",
"NAME READY STATUS RESTARTS AGE cert-manager-operator-controller-manager-695b4d46cb-r4hld 2/2 Running 0 7m4s",
"oc get pods -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-58b7f649c4-dp6l4 1/1 Running 0 7m1s cert-manager-cainjector-5565b8f897-gx25h 1/1 Running 0 7m37s cert-manager-webhook-9bc98cbdd-f972x 1/1 Running 0 7m40s",
"oc create configmap trusted-ca -n cert-manager",
"oc label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true -n cert-manager",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}]}}}'",
"oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator && rollout status deployment/cert-manager -n cert-manager && rollout status deployment/cert-manager-webhook -n cert-manager && rollout status deployment/cert-manager-cainjector -n cert-manager",
"deployment \"cert-manager-operator-controller-manager\" successfully rolled out deployment \"cert-manager\" successfully rolled out deployment \"cert-manager-webhook\" successfully rolled out deployment \"cert-manager-cainjector\" successfully rolled out",
"oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.'containers[0].volumeMounts'}",
"[{\"mountPath\":\"/etc/pki/tls/certs/cert-manager-tls-ca-bundle.crt\",\"name\":\"trusted-ca\",\"subPath\":\"ca-bundle.crt\"}]",
"oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.volumes}",
"[{\"configMap\":{\"defaultMode\":420,\"name\":\"trusted-ca\"},\"name\":\"trusted-ca\"}]",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideEnv: - name: HTTP_PROXY value: http://<proxy_url> 1 - name: HTTPS_PROXY value: https://<proxy_url> 2 - name: NO_PROXY value: <ignore_proxy_domains> 3",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s",
"oc get pod <redeployed_cert-manager_controller_pod> -n cert-manager -o yaml",
"env: - name: HTTP_PROXY value: http://<PROXY_URL> - name: HTTPS_PROXY value: https://<PROXY_URL> - name: NO_PROXY value: <IGNORE_PROXY_DOMAINS>",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideArgs: - '--dns01-recursive-nameservers=<server_address>' 1 - '--dns01-recursive-nameservers-only' 2 - '--acme-http01-solver-nameservers=<host>:<port>' 3 - '--v=<verbosity_level>' 4 - '--metrics-listen-address=<host>:<port>' 5 - '--issuer-ambient-credentials' 6 webhookConfig: overrideArgs: - '--v=4' 7 cainjectorConfig: overrideArgs: - '--v=2' 8",
"oc get pods -n cert-manager -o yaml",
"metadata: name: cert-manager-6d4b5d4c97-kldwl namespace: cert-manager spec: containers: - args: - --acme-http01-solver-nameservers=1.1.1.1:53 - --cluster-resource-namespace=USD(POD_NAMESPACE) - --dns01-recursive-nameservers=1.1.1.1:53 - --dns01-recursive-nameservers-only - --leader-election-namespace=kube-system - --max-concurrent-challenges=60 - --metrics-listen-address=0.0.0.0:9042 - --v=6 metadata: name: cert-manager-cainjector-866c4fd758-ltxxj namespace: cert-manager spec: containers: - args: - --leader-election-namespace=kube-system - --v=2 metadata: name: cert-manager-webhook-6d48f88495-c88gd namespace: cert-manager spec: containers: - args: - --v=4",
"oc get certificate",
"NAME READY SECRET AGE certificate-from-clusterissuer-route53-ambient True certificate-from-clusterissuer-route53-ambient 8h",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideArgs: - '--enable-certificate-owner-ref'",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager -o yaml",
"metadata: name: cert-manager-6e4b4d7d97-zmdnb namespace: cert-manager spec: containers: - args: - --enable-certificate-owner-ref",
"oc get deployment -n cert-manager",
"NAME READY UP-TO-DATE AVAILABLE AGE cert-manager 1/1 1 1 53m cert-manager-cainjector 1/1 1 1 53m cert-manager-webhook 1/1 1 1 53m",
"oc get deployment -n cert-manager -o yaml",
"metadata: name: cert-manager namespace: cert-manager spec: template: spec: containers: - name: cert-manager-controller resources: {} 1 metadata: name: cert-manager-cainjector namespace: cert-manager spec: template: spec: containers: - name: cert-manager-cainjector resources: {} 2 metadata: name: cert-manager-webhook namespace: cert-manager spec: template: spec: containers: - name: cert-manager-webhook resources: {} 3",
"oc patch certmanager.operator cluster --type=merge -p=\" spec: controllerConfig: overrideResources: limits: 1 cpu: 200m 2 memory: 64Mi 3 requests: 4 cpu: 10m 5 memory: 16Mi 6 webhookConfig: overrideResources: limits: 7 cpu: 200m 8 memory: 64Mi 9 requests: 10 cpu: 10m 11 memory: 16Mi 12 cainjectorConfig: overrideResources: limits: 13 cpu: 200m 14 memory: 64Mi 15 requests: 16 cpu: 10m 17 memory: 16Mi 18 \"",
"certmanager.operator.openshift.io/cluster patched",
"oc get deployment -n cert-manager -o yaml",
"metadata: name: cert-manager namespace: cert-manager spec: template: spec: containers: - name: cert-manager-controller resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi metadata: name: cert-manager-cainjector namespace: cert-manager spec: template: spec: containers: - name: cert-manager-cainjector resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi metadata: name: cert-manager-webhook namespace: cert-manager spec: template: spec: containers: - name: cert-manager-webhook resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi",
"oc patch certmanager.operator cluster --type=merge -p=\" spec: controllerConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 1 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule 2 webhookConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 3 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule 4 cainjectorConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 5 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule\" 6",
"oc get pods -n cert-manager -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cert-manager-58d9c69db4-78mzp 1/1 Running 0 10m 10.129.0.36 ip-10-0-1-106.ec2.internal <none> <none> cert-manager-cainjector-85b6987c66-rhzf7 1/1 Running 0 11m 10.128.0.39 ip-10-0-1-136.ec2.internal <none> <none> cert-manager-webhook-7f54b4b858-29bsp 1/1 Running 0 11m 10.129.0.35 ip-10-0-1-106.ec2.internal <none> <none>",
"oc get deployments -n cert-manager -o jsonpath='{range .items[*]}{.metadata.name}{\"\\n\"}{.spec.template.spec.nodeSelector}{\"\\n\"}{.spec.template.spec.tolerations}{\"\\n\\n\"}{end}'",
"cert-manager {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}] cert-manager-cainjector {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}] cert-manager-webhook {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}]",
"oc get events -n cert-manager --field-selector reason=Scheduled",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"route53:GetChange\" effect: Allow resource: \"arn:aws:route53:::change/*\" - action: - \"route53:ChangeResourceRecordSets\" - \"route53:ListResourceRecordSets\" effect: Allow resource: \"arn:aws:route53:::hostedzone/*\" - action: - \"route53:ListHostedZonesByName\" effect: Allow resource: \"*\" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager",
"oc create -f sample-credential-request.yaml",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"aws-creds\"}]}}}'",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s",
"oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml",
"spec: containers: - args: - mountPath: /.aws name: cloud-credentials volumes: - name: cloud-credentials secret: secretName: aws-creds",
"mkdir credentials-request",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"route53:GetChange\" effect: Allow resource: \"arn:aws:route53:::change/*\" - action: - \"route53:ChangeResourceRecordSets\" - \"route53:ListResourceRecordSets\" effect: Allow resource: \"arn:aws:route53:::hostedzone/*\" - action: - \"route53:ListHostedZonesByName\" effect: Allow resource: \"*\" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager",
"ccoctl aws create-iam-roles --name <user_defined_name> --region=<aws_region> --credentials-requests-dir=<path_to_credrequests_dir> --identity-provider-arn <oidc_provider_arn> --output-dir=<path_to_output_dir>",
"2023/05/15 18:10:34 Role arn:aws:iam::XXXXXXXXXXXX:role/<user_defined_name>-cert-manager-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: <path_to_output_dir>/manifests/cert-manager-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role <user_defined_name>-cert-manager-aws-creds",
"oc -n cert-manager annotate serviceaccount cert-manager eks.amazonaws.com/role-arn=\"<aws_role_arn>\"",
"oc delete pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s",
"oc set env -n cert-manager po/<cert_manager_controller_pod_name> --list",
"pods/cert-manager-57f9555c54-vbcpg, container cert-manager-controller POD_NAMESPACE from field path metadata.namespace AWS_ROLE_ARN=XXXXXXXXXXXX AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager",
"oc create -f sample-credential-request.yaml",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"gcp-credentials\"}]}}}'",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s",
"oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml",
"spec: containers: - args: volumeMounts: - mountPath: /.config/gcloud name: cloud-credentials . volumes: - name: cloud-credentials secret: items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials",
"mkdir credentials-request",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager",
"ccoctl gcp create-service-accounts --name <user_defined_name> --output-dir=<path_to_output_dir> --credentials-requests-dir=<path_to_credrequests_dir> --workload-identity-pool <workload_identity_pool> --workload-identity-provider <workload_identity_provider> --project <gcp_project_id>",
"ccoctl gcp create-service-accounts --name abcde-20230525-4bac2781 --output-dir=/home/outputdir --credentials-requests-dir=/home/credentials-requests --workload-identity-pool abcde-20230525-4bac2781 --workload-identity-provider abcde-20230525-4bac2781 --project openshift-gcp-devel",
"ls <path_to_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"gcp-credentials\"}]}}}'",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s",
"oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml",
"spec: containers: - args: volumeMounts: - mountPath: /var/run/secrets/openshift/serviceaccount name: bound-sa-token - mountPath: /.config/gcloud name: cloud-credentials volumes: - name: bound-sa-token projected: sources: - serviceAccountToken: audience: openshift path: token - name: cloud-credentials secret: items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials",
"apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: acme-cluster-issuer spec: acme:",
"apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-staging 1 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_for_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - http01: ingress: ingressClassName: openshift-default 4",
"oc patch ingress/<ingress-name> --type=merge --patch '{\"spec\":{\"ingressClassName\":\"openshift-default\"}}' -n <namespace>",
"oc create -f acme-cluster-issuer.yaml",
"apiVersion: v1 kind: Namespace metadata: name: my-ingress-namespace 1",
"oc create -f namespace.yaml",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: sample-ingress 1 namespace: my-ingress-namespace 2 annotations: cert-manager.io/cluster-issuer: letsencrypt-staging 3 spec: ingressClassName: openshift-default 4 tls: - hosts: - <hostname> 5 secretName: sample-tls 6 rules: - host: <hostname> 7 http: paths: - path: / pathType: Prefix backend: service: name: sample-workload 8 port: number: 80",
"oc create -f ingress.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project <issuer_namespace>",
"oc create secret generic aws-secret --from-literal=awsSecretAccessKey=<aws_secret_access_key> \\ 1 -n my-issuer-namespace",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: \"<email_address>\" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: accessKeyID: <aws_key_id> 6 hostedZoneID: <hosted_zone_id> 7 region: <region_name> 8 secretAccessKeySecretRef: name: \"aws-secret\" 9 key: \"awsSecretAccessKey\" 10",
"oc create -f issuer.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project <issuer_namespace>",
"oc patch certmanager/cluster --type=merge -p='{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--issuer-ambient-credentials\"]}}}'",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: \"<email_address>\" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: hostedZoneID: <hosted_zone_id> 6 region: us-east-1",
"oc create -f issuer.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project my-issuer-namespace",
"oc create secret generic clouddns-dns01-solver-svc-acct --from-file=service_account.json=<path/to/gcp_service_account.json> -n my-issuer-namespace",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: cloudDNS: project: <project_id> 5 serviceAccountSecretRef: name: clouddns-dns01-solver-svc-acct 6 key: service_account.json 7",
"oc create -f issuer.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project <issuer_namespace>",
"oc patch certmanager/cluster --type=merge -p='{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--issuer-ambient-credentials\"]}}}'",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - dns01: cloudDNS: project: <gcp_project_id> 4",
"oc create -f issuer.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project my-issuer-namespace",
"oc create secret generic <secret_name> --from-literal=<azure_secret_access_key_name>=<azure_secret_access_key_value> \\ 1 2 3 -n my-issuer-namespace",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme-dns01-azuredns-issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: azureDNS: clientID: <azure_client_id> 5 clientSecretSecretRef: name: <secret_name> 6 key: <azure_secret_access_key_name> 7 subscriptionID: <azure_subscription_id> 8 tenantID: <azure_tenant_id> 9 resourceGroupName: <azure_dns_zone_resource_group> 10 hostedZoneName: <azure_dns_zone> 11 environment: AzurePublicCloud",
"oc create -f issuer.yaml",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: <issuer_namespace> 2 spec: isCA: false commonName: '<common_name>' 3 secretName: <secret_name> 4 dnsNames: - \"<domain_name>\" 5 issuerRef: name: <issuer_name> 6 kind: Issuer",
"oc create -f certificate.yaml",
"oc get certificate -w -n <issuer_namespace>",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-config spec: isCA: false commonName: \"api.<cluster_base_domain>\" 2 secretName: <secret_name> 3 dnsNames: - \"api.<cluster_base_domain>\" 4 issuerRef: name: <issuer_name> 5 kind: Issuer",
"oc create -f certificate.yaml",
"oc get certificate -w -n openshift-config",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-ingress spec: isCA: false commonName: \"apps.<cluster_base_domain>\" 2 secretName: <secret_name> 3 dnsNames: - \"apps.<cluster_base_domain>\" 4 - \"*.apps.<cluster_base_domain>\" 5 issuerRef: name: <issuer_name> 6 kind: Issuer",
"oc create -f certificate.yaml",
"oc get certificate -w -n openshift-ingress",
"oc create route edge <route_name> \\ 1 --service=<service_name> \\ 2 --hostname=<hostname> \\ 3 --namespace=<namespace> 4",
"oc create -f - << EOF apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: letsencrypt-acme namespace: <namespace> 1 spec: acme: server: https://acme-v02.api.letsencrypt.org/directory privateKeySecretRef: name: letsencrypt-acme-account-key solvers: - http01: ingress: ingressClassName: openshift-default EOF",
"oc create -f - << EOF apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: example-route-cert namespace: <namespace> 1 spec: commonName: <hostname> 2 dnsNames: - <hostname> 3 usages: - server auth issuerRef: kind: Issuer name: letsencrypt-acme secretName: <secret_name> 4 EOF",
"oc create role secret-reader --verb=get,list,watch --resource=secrets --resource-name=<secret_name> \\ 1 --namespace=<namespace> 2",
"oc create rolebinding secret-reader-binding --role=secret-reader --serviceaccount=openshift-ingress:router --namespace=<namespace> 1",
"oc patch route <route_name> \\ 1 -n <namespace> \\ 2 --type=merge -p '{\"spec\":{\"tls\":{\"externalCertificate\":{\"name\":\"<secret_name>\"}}}}' 3",
"oc get certificate -n <namespace> 1 oc get secret -n <namespace> 2",
"curl -IsS https://<hostname> 1",
"curl -v https://<hostname> 1",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"UNSUPPORTED_ADDON_FEATURES\",\"value\":\"IstioCSR=true\"}]}}}'",
"oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator",
"deployment \"cert-manager-operator-controller-manager\" successfully rolled out",
"apiVersion: cert-manager.io/v1 kind: Issuer 1 metadata: name: selfsigned namespace: <istio_project_name> 2 spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: <istio_project_name> spec: isCA: true duration: 87600h # 10 years secretName: istio-ca commonName: istio-ca privateKey: algorithm: ECDSA size: 256 subject: organizations: - cluster.local - cert-manager issuerRef: name: selfsigned kind: Issuer 3 group: cert-manager.io --- kind: Issuer metadata: name: istio-ca namespace: <istio_project_name> 4 spec: ca: secretName: istio-ca",
"oc get issuer istio-ca -n <istio_project_name>",
"NAME READY AGE istio-ca True 3m",
"oc new-project <istio_csr_project_name>",
"apiVersion: operator.openshift.io/v1alpha1 kind: IstioCSR metadata: name: default namespace: <istio_csr_project_name> spec: IstioCSRConfig: certManager: issuerRef: name: istio-ca 1 kind: Issuer 2 group: cert-manager.io istiodTLSConfig: trustDomain: cluster.local istio: namespace: istio-system",
"oc create -f IstioCSR.yaml",
"oc get deployment -n <istio_csr_project_name>",
"NAME READY UP-TO-DATE AVAILABLE AGE cert-manager-istio-csr 1/1 1 1 24s",
"oc get pod -n <istio_csr_project_name>",
"NAME READY STATUS RESTARTS AGE cert-manager-istio-csr-5c979f9b7c-bv57w 1/1 Running 0 45s",
"oc -n <istio_csr_project_name> logs <istio_csr_pod_name>",
"oc -n cert-manager-operator logs <cert_manager_operator_pod_name>",
"oc -n <istio-csr_project_name> delete istiocsrs.operator.openshift.io default",
"oc get clusterrolebindings,clusterroles -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\"",
"oc get certificate,deployments,services,serviceaccounts -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\" -n <istio_csr_project_name>",
"oc get roles,rolebindings -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\" -n <istio_csr_project_name>",
"oc -n <istio_csr_project_name> delete <resource_type>/<resource_name>",
"oc label namespace cert-manager openshift.io/cluster-monitoring=true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: prometheus-k8s namespace: cert-manager rules: - apiGroups: - \"\" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: prometheus-k8s namespace: cert-manager roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: cert-manager app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager name: cert-manager namespace: cert-manager spec: endpoints: - interval: 30s port: tcp-prometheus-servicemonitor scheme: http selector: matchLabels: app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager",
"oc create -f monitoring.yaml",
"{instance=\"<endpoint>\"} 1",
"{endpoint=\"tcp-prometheus-servicemonitor\"}",
"oc edit certmanager.operator cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager spec: logLevel: <log_level> 1",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"OPERATOR_LOG_LEVEL\",\"value\":\"v\"}]}}}' 1",
"oc set env deploy/cert-manager-operator-controller-manager -n cert-manager-operator --list | grep -e OPERATOR_LOG_LEVEL -e container",
"deployments/cert-manager-operator-controller-manager, container kube-rbac-proxy OPERATOR_LOG_LEVEL=9 deployments/cert-manager-operator-controller-manager, container cert-manager-operator OPERATOR_LOG_LEVEL=9",
"oc logs deploy/cert-manager-operator-controller-manager -n cert-manager-operator",
"oc delete deployment -n cert-manager -l app.kubernetes.io/instance=cert-manager"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_and_compliance/cert-manager-operator-for-red-hat-openshift |
6.5. Removing Lost Physical Volumes from a Volume Group If you lose a physical volume, you can activate the remaining physical volumes in the volume group with the --partial argument of the vgchange command. You can remove all the logical volumes that used that physical volume from the volume group with the --removemissing argument of the vgreduce command. You should run the vgreduce command with the --test argument to verify what you will be destroying. Like most LVM operations, the vgreduce command is reversible if you immediately use the vgcfgrestore command to restore the volume group metadata to its previous state. For example, if you used the --removemissing argument of the vgreduce command without the --test argument and find you have removed logical volumes you wanted to keep, you can still replace the physical volume and use another vgcfgrestore command to return the volume group to its previous state. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lost_pv_remove_from_vg
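A minimal command sketch of the recovery flow described above, assuming a hypothetical volume group named vg01 (the volume group name is an illustrative placeholder, not taken from the original procedure):
# Activate the remaining physical volumes even though one is missing
vgchange --activate y --partial vg01
# Preview which logical volumes would be removed; --test makes no changes
vgreduce --removemissing --test vg01
# Remove the missing physical volume and the logical volumes that used it
vgreduce --removemissing vg01
# If the removal went further than intended, restore the previous volume group metadata
vgcfgrestore vg01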
Getting Started Guide | Getting Started Guide Red Hat JBoss Data Virtualization 6.4 Learn how to perform a basic installation of Red Hat JBoss Data Virtualization and perform some rudimentary tasks with the product. David Le Sage [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/getting_started_guide/index |
Chapter 3. PodDisruptionBudget [policy/v1] | Chapter 3. PodDisruptionBudget [policy/v1] Description PodDisruptionBudget is an object to define the max disruption that can be caused to a collection of pods Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PodDisruptionBudgetSpec is a description of a PodDisruptionBudget. status object PodDisruptionBudgetStatus represents information about the status of a PodDisruptionBudget. Status may trail the actual state of a system. 3.1.1. .spec Description PodDisruptionBudgetSpec is a description of a PodDisruptionBudget. Type object Property Type Description maxUnavailable IntOrString An eviction is allowed if at most "maxUnavailable" pods selected by "selector" are unavailable after the eviction, i.e. even in absence of the evicted pod. For example, one can prevent all voluntary evictions by specifying 0. This is a mutually exclusive setting with "minAvailable". minAvailable IntOrString An eviction is allowed if at least "minAvailable" pods selected by "selector" will still be available after the eviction, i.e. even in the absence of the evicted pod. So for example you can prevent all voluntary evictions by specifying "100%". selector LabelSelector Label query over pods whose evictions are managed by the disruption budget. A null selector will match no pods, while an empty ({}) selector will select all pods within the namespace. unhealthyPodEvictionPolicy string UnhealthyPodEvictionPolicy defines the criteria for when unhealthy pods should be considered for eviction. Current implementation considers healthy pods, as pods that have status.conditions item with type="Ready",status="True". Valid policies are IfHealthyBudget and AlwaysAllow. If no policy is specified, the default behavior will be used, which corresponds to the IfHealthyBudget policy. IfHealthyBudget policy means that running pods (status.phase="Running"), but not yet healthy can be evicted only if the guarded application is not disrupted (status.currentHealthy is at least equal to status.desiredHealthy). Healthy pods will be subject to the PDB for eviction. AlwaysAllow policy means that all running pods (status.phase="Running"), but not yet healthy are considered disrupted and can be evicted regardless of whether the criteria in a PDB is met. This means perspective running pods of a disrupted application might not get a chance to become healthy. Healthy pods will be subject to the PDB for eviction. Additional policies may be added in the future. Clients making eviction decisions should disallow eviction of unhealthy pods if they encounter an unrecognized policy in this field. This field is alpha-level. The eviction API uses this field when the feature gate PDBUnhealthyPodEvictionPolicy is enabled (disabled by default). 
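A minimal sketch showing how the spec fields above fit together; the budget name, namespace, and app=example-app label are illustrative placeholders, not values from this reference:
# Create a budget that keeps at least 2 matching pods available during voluntary disruptions
oc apply -f - <<EOF
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
  namespace: example-namespace
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: example-app
EOF
# Inspect the computed status fields (disruptionsAllowed, currentHealthy, desiredHealthy)
oc get poddisruptionbudget example-pdb -n example-namespace -o yaml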
3.1.2. .status Description PodDisruptionBudgetStatus represents information about the status of a PodDisruptionBudget. Status may trail the actual state of a system. Type object Required disruptionsAllowed currentHealthy desiredHealthy expectedPods Property Type Description conditions array (Condition) Conditions contain conditions for PDB. The disruption controller sets the DisruptionAllowed condition. The following are known values for the reason field (additional reasons could be added in the future): - SyncFailed: The controller encountered an error and wasn't able to compute the number of allowed disruptions. Therefore no disruptions are allowed and the status of the condition will be False. - InsufficientPods: The number of pods are either at or below the number required by the PodDisruptionBudget. No disruptions are allowed and the status of the condition will be False. - SufficientPods: There are more pods than required by the PodDisruptionBudget. The condition will be True, and the number of allowed disruptions are provided by the disruptionsAllowed property. currentHealthy integer current number of healthy pods desiredHealthy integer minimum desired number of healthy pods disruptedPods object (Time) DisruptedPods contains information about pods whose eviction was processed by the API server eviction subresource handler but has not yet been observed by the PodDisruptionBudget controller. A pod will be in this map from the time when the API server processed the eviction request to the time when the pod is seen by PDB controller as having been marked for deletion (or after a timeout). The key in the map is the name of the pod and the value is the time when the API server processed the eviction request. If the deletion didn't occur and a pod is still there it will be removed from the list automatically by PodDisruptionBudget controller after some time. If everything goes smooth this map should be empty for the most of the time. Large number of entries in the map may indicate problems with pod deletions. disruptionsAllowed integer Number of pod disruptions that are currently allowed. expectedPods integer total number of pods counted by this disruption budget observedGeneration integer Most recent generation observed when updating this PDB status. DisruptionsAllowed and other status information is valid only if observedGeneration equals to PDB's object generation. 3.2. API endpoints The following API endpoints are available: /apis/policy/v1/poddisruptionbudgets GET : list or watch objects of kind PodDisruptionBudget /apis/policy/v1/watch/poddisruptionbudgets GET : watch individual changes to a list of PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets DELETE : delete collection of PodDisruptionBudget GET : list or watch objects of kind PodDisruptionBudget POST : create a PodDisruptionBudget /apis/policy/v1/watch/namespaces/{namespace}/poddisruptionbudgets GET : watch individual changes to a list of PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets/{name} DELETE : delete a PodDisruptionBudget GET : read the specified PodDisruptionBudget PATCH : partially update the specified PodDisruptionBudget PUT : replace the specified PodDisruptionBudget /apis/policy/v1/watch/namespaces/{namespace}/poddisruptionbudgets/{name} GET : watch changes to an object of kind PodDisruptionBudget. 
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets/{name}/status GET : read status of the specified PodDisruptionBudget PATCH : partially update status of the specified PodDisruptionBudget PUT : replace status of the specified PodDisruptionBudget 3.2.1. /apis/policy/v1/poddisruptionbudgets Table 3.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind PodDisruptionBudget Table 3.2. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudgetList schema 401 - Unauthorized Empty 3.2.2. /apis/policy/v1/watch/poddisruptionbudgets Table 3.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead. Table 3.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets Table 3.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PodDisruptionBudget Table 3.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 3.8. Body parameters Parameter Type Description body DeleteOptions schema Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PodDisruptionBudget Table 3.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudgetList schema 401 - Unauthorized Empty HTTP method POST Description create a PodDisruptionBudget Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body PodDisruptionBudget schema Table 3.14. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 202 - Accepted PodDisruptionBudget schema 401 - Unauthorized Empty 3.2.4. /apis/policy/v1/watch/namespaces/{namespace}/poddisruptionbudgets Table 3.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead. Table 3.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets/{name} Table 3.18. Global path parameters Parameter Type Description name string name of the PodDisruptionBudget namespace string object name and auth scope, such as for teams and projects Table 3.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PodDisruptionBudget Table 3.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.21. Body parameters Parameter Type Description body DeleteOptions schema Table 3.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PodDisruptionBudget Table 3.23. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PodDisruptionBudget Table 3.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.25. Body parameters Parameter Type Description body Patch schema Table 3.26. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PodDisruptionBudget Table 3.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.28. Body parameters Parameter Type Description body PodDisruptionBudget schema Table 3.29. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 401 - Unauthorized Empty 3.2.6. /apis/policy/v1/watch/namespaces/{namespace}/poddisruptionbudgets/{name} Table 3.30. Global path parameters Parameter Type Description name string name of the PodDisruptionBudget namespace string object name and auth scope, such as for teams and projects Table 3.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind PodDisruptionBudget. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.7. /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets/{name}/status Table 3.33. Global path parameters Parameter Type Description name string name of the PodDisruptionBudget namespace string object name and auth scope, such as for teams and projects Table 3.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified PodDisruptionBudget Table 3.35. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PodDisruptionBudget Table 3.36. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.37. Body parameters Parameter Type Description body Patch schema Table 3.38. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PodDisruptionBudget Table 3.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.40. Body parameters Parameter Type Description body PodDisruptionBudget schema Table 3.41. HTTP responses HTTP code Reponse body 200 - OK PodDisruptionBudget schema 201 - Created PodDisruptionBudget schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/policy_apis/poddisruptionbudget-policy-v1 |
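For illustration only, the endpoints documented in this chapter can be exercised directly against the API server with curl. The following is a sketch that assumes an authenticated oc session; the namespace my-ns and the PodDisruptionBudget name my-pdb are placeholders:

TOKEN=$(oc whoami -t)
APISERVER=$(oc whoami --show-server)
# DELETE /apis/policy/v1/namespaces/{namespace}/poddisruptionbudgets/{name}
# with foreground cascading deletion (propagationPolicy, Table 3.20).
# -k skips TLS verification and is used here only to keep the sketch short.
curl -k -X DELETE -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/apis/policy/v1/namespaces/my-ns/poddisruptionbudgets/my-pdb?propagationPolicy=Foreground"
# Read the status subresource described in section 3.2.7.
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/apis/policy/v1/namespaces/my-ns/poddisruptionbudgets/my-pdb/status"
# Watch a single object by using the 'watch' and 'fieldSelector' parameters
# on a list call instead of the deprecated watch endpoint (-N streams output).
curl -k -N -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/apis/policy/v1/namespaces/my-ns/poddisruptionbudgets?watch=true&fieldSelector=metadata.name%3Dmy-pdb"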
Chapter 30. KafkaJmxAuthenticationPassword schema reference | Chapter 30. KafkaJmxAuthenticationPassword schema reference Used in: KafkaJmxOptions The type property is a discriminator that distinguishes use of the KafkaJmxAuthenticationPassword type from other subtypes which may be added in the future. It must have the value password for the type KafkaJmxAuthenticationPassword . Property Property type Description type string Must be password . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaJmxAuthenticationPassword-reference |
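As a minimal, hypothetical sketch of how this discriminator is used in practice: password-protected JMX is enabled by setting type: password under the jmxOptions.authentication block of a Kafka resource, after which the Cluster Operator generates the JMX credentials in a secret. The cluster name my-cluster and the namespace kafka below are placeholders:

# Merge the authentication block into an existing Kafka resource.
oc patch kafka my-cluster -n kafka --type merge \
  -p '{"spec":{"kafka":{"jmxOptions":{"authentication":{"type":"password"}}}}}'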
Chapter 5. Troubleshoot updates | Chapter 5. Troubleshoot updates To troubleshoot MicroShift updates, use the following guide. 5.1. Troubleshooting MicroShift updates In some cases, MicroShift might fail to update. In these events, it is helpful to understand failure types and how to troubleshoot them. 5.1.1. Update path is blocked by MicroShift version sequence Non-EUS versions of MicroShift require serial updates. For example, if you attempt to update from MicroShift 4.15.5 directly to 4.17.1 , the update fails. You must first update 4.15.5 to 4.16.z , and then you can update from 4.16.z to 4.17.0 . 5.1.2. Update path is blocked by version incompatibility RPM dependency errors result if a MicroShift update is incompatible with the version of Red Hat Enterprise Linux for Edge (RHEL for Edge) or Red Hat Enterprise Linux (RHEL). 5.1.2.1. Compatibility table Check the following compatibility table: Red Hat Device Edge release compatibility matrix Red Hat Enterprise Linux (RHEL) and MicroShift work together as a single solution for device-edge computing. You can update each component separately, but the product versions must be compatible. Supported configurations of Red Hat Device Edge use verified releases for each together as listed in the following table: RHEL Version(s) MicroShift Version Supported MicroShift Version -> Version Updates 9.4 4.18 4.18.0 -> 4.18.z 9.4 4.17 4.17.1 -> 4.17.z, 4.17 -> 4.18 9.4 4.16 4.16.0 -> 4.16.z, 4.16 -> 4.17, 4.16 -> 4.18 9.2, 9.3 4.15 4.15.0 -> 4.15.z, 4.15 -> 4.16 on RHEL 9.4 9.2, 9.3 4.14 4.14.0 -> 4.14.z, 4.14 -> 4.15, 4.14 -> 4.16 on RHEL 9.4 5.1.2.2. Version compatibility Check the following update paths: Red Hat build of MicroShift update paths Generally Available Version 4.18.0 to 4.18.z on RHEL 9.4 Generally Available Version 4.17.1 to 4.17.z on RHEL 9.4 Generally Available Version 4.15.0 from RHEL 9.2 to 4.16.0 on RHEL 9.4 Generally Available Version 4.14.0 from RHEL 9.2 to 4.15.0 on RHEL 9.4 5.1.3. OSTree update failed If you updated on an OSTree system, the Greenboot health check automatically logs and acts on system health. A failure can be indicated by a system rollback by Greenboot. In cases where the update failed, but Greenboot did not complete a system rollback, you can troubleshoot using the RHEL for Edge documentation linked in the "Additional resources" section that follows this content. Checking the Greenboot logs manually Manually check the Greenboot logs to verify system health by running the following command: USD sudo systemctl restart --no-block greenboot-healthcheck && sudo journalctl -fu greenboot-healthcheck 5.1.4. Manual RPM update failed If you updated by using RPMs on a non-OSTree system, an update failure can be indicated by Greenboot, but the health checks are only informative. Checking the system logs is the step in troubleshooting a manual RPM update failure. You can use Greenboot and sos report to check both the MicroShift update and the host system. Additional resources Enabling systemd journal service data persistency Checking the MicroShift version Stopping the MicroShift service Starting the MicroShift service Composing, installing, and managing RHEL for Edge images Rolling back RHEL for Edge images 5.2. Checking journal logs after updates In some cases, MicroShift might fail to update. In these events, it is helpful to understand failure types and how to troubleshoot them. The journal logs can assist in diagnosing update failures. 
Note The default configuration of the systemd journal service stores data in a volatile directory. To persist system logs across system starts and restarts, enable log persistence and set limits on the maximum journal data size. Procedure Get comprehensive MicroShift journal logs by running the following command: USD sudo journalctl -u microshift Check the Greenboot journal logs by running the following command: USD sudo journalctl -u greenboot-healthcheck Examining the comprehensive logs of a specific boot uses three steps. First list the boots, then select the one you want from the list you obtained: List the boots present in the journal logs by running the following command: USD sudo journalctl --list-boots Example output IDX BOOT ID FIRST ENTRY LAST ENTRY 0 681ece6f5c3047e183e9d43268c5527f <Day> <Date> 12:27:58 UTC <Day> <Date> 13:39:41 UTC #.... Check the journal logs for the specific boot you want by running the following command: USD sudo journalctl --boot <idx_or_boot_id> 1 1 Replace <idx_or_boot_id> with the IDX or the BOOT ID number assigned to the specific boot that you want to check. Check the journal logs for the boot of a specific service by running the following command: USD sudo journalctl --boot <idx_or_boot_id> -u <service_name> 1 2 1 Replace <idx_or_boot_id> with the IDX or the BOOT ID number assigned to the specific boot that you want to check. 2 Replace <service_name> with the name of the service that you want to check. 5.3. Checking the status of greenboot health checks Check the status of greenboot health checks before making changes to the system or during troubleshooting. You can use any of the following commands to help you ensure that greenboot scripts have finished running. Procedure To see a report of health check status, use the following command: USD systemctl show --property=SubState --value greenboot-healthcheck.service An output of start means that greenboot checks are still running. An output of exited means that checks have passed and greenboot has exited. Greenboot runs the scripts in the green.d directory when the system is in a healthy state. An output of failed means that checks have not passed. Greenboot runs the scripts in the red.d directory when the system is in this state and might restart the system. To see a report showing the numerical exit code of the service where 0 means success and non-zero values mean a failure occurred, use the following command: USD systemctl show --property=ExecMainStatus --value greenboot-healthcheck.service To see a report showing a message about boot status, such as Boot Status is GREEN - Health Check SUCCESS , use the following command: USD cat /run/motd.d/boot-status | [
"sudo systemctl restart --no-block greenboot-healthcheck && sudo journalctl -fu greenboot-healthcheck",
"sudo journalctl -u microshift",
"sudo journalctl -u greenboot-healthcheck",
"sudo journalctl --list-boots",
"IDX BOOT ID FIRST ENTRY LAST ENTRY 0 681ece6f5c3047e183e9d43268c5527f <Day> <Date> 12:27:58 UTC <Day> <Date>> 13:39:41 UTC #.",
"sudo journalctl --boot <idx_or_boot_id> 1",
"sudo journalctl --boot <idx_or_boot_id> -u <service_name> 1 2",
"systemctl show --property=SubState --value greenboot-healthcheck.service",
"systemctl show --property=ExecMainStatus --value greenboot-healthcheck.service",
"cat /run/motd.d/boot-status"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/troubleshooting/microshift-troubleshoot-updates |
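To supplement the troubleshooting commands above, the following sketch shows one way to enable the journal persistence mentioned in the note in section 5.2 before reproducing a failed update; the 100M size cap is an arbitrary example value:

sudo mkdir -p /etc/systemd/journald.conf.d
sudo tee /etc/systemd/journald.conf.d/10-persistent.conf <<'EOF'
[Journal]
Storage=persistent
SystemMaxUse=100M
EOF
sudo systemctl restart systemd-journald
# Collect MicroShift and greenboot logs for the current boot in one call.
sudo journalctl --boot 0 -u microshift -u greenboot-healthcheck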
Chapter 4. Using the config tool to reconfigure Red Hat Quay on OpenShift Container Platform | Chapter 4. Using the config tool to reconfigure Red Hat Quay on OpenShift Container Platform 4.1. Accessing the config editor In the Details section of the QuayRegistry object, the endpoint for the config editor is available, along with a link to the Secret object that contains the credentials for logging into the config editor. For example: 4.1.1. Retrieving the config editor credentials Use the following procedure to retrieve the config editor credentials. Procedure Click on the link for the config editor secret: In the Data section of the Secret details page, click Reveal values to see the credentials for logging into the config editor. For example: 4.1.2. Logging into the config editor Use the following procedure to log into the config editor. Procedure Navigate to the config editor endpoint. When prompted, enter the username, for example, quayconfig , and the password. For example: 4.1.3. Changing configuration In the following example, you will update your configuration file by changing the default expiration period of deleted tags. Procedure On the config editor, locate the Time Machine section. Add an expiration period to the Allowed expiration periods box, for example, 4w : Select Validate Configuration Changes to ensure that the changes are valid. Apply the changes by pressing Reconfigure Quay : After applying the changes, the config tool notifies you that the changes made have been submitted to your Red Hat Quay deployment: Note Reconfiguring Red Hat Quay using the config tool UI can lead to the registry being unavailable for a short time while the updated configuration is applied. 4.2. Monitoring reconfiguration in the Red Hat Quay UI You can monitor the reconfiguration of Red Hat Quay in real-time. 4.2.1. QuayRegistry resource After reconfiguring the Red Hat Quay Operator, you can track the progress of the redeployment in the YAML tab for the specific instance of QuayRegistry , in this case, example-registry : Each time the status changes, you will be prompted to reload the data to see the updated version. Eventually, the Red Hat Quay Operator reconciles the changes, and there are no unhealthy components reported. 4.2.2. Events The Events tab for the QuayRegistry shows some events related to the redeployment. For example: Streaming events, for all resources in the namespace that are affected by the reconfiguration, are available in the OpenShift Container Platform console under Home → Events . For example: 4.3. Accessing updated information after reconfiguration Use the following procedure to access the updated config.yaml file using the Red Hat Quay UI and the config bundle. Procedure On the QuayRegistry Details screen, click on the Config Bundle Secret . In the Data section of the Secret details screen, click Reveal values to see the config.yaml file. Check that the change has been applied. In this case, 4w should be in the list of TAG_EXPIRATION_OPTIONS . For example: --- SERVER_HOSTNAME: example-quay-openshift-operators.apps.docs.quayteam.org SETUP_COMPLETE: true SUPER_USERS: - quayadmin TAG_EXPIRATION_OPTIONS: - 2w - 4w --- 4.4. Custom SSL/TLS certificates UI The config tool can be used to load custom certificates to facilitate access to resources like external databases. Select the custom certs to be uploaded, ensuring that they are in PEM format, with an extension .crt . The config tool also displays a list of any uploaded certificates.
After you upload your custom SSL/TLS cert, it will appear in the list. For example: 4.5. External Access to the Registry When running on OpenShift Container Platform, the Routes API is available and is automatically used as a managed component. After creating the QuayRegistry object, the external access point can be found in the status block of the QuayRegistry object. For example: status: registryEndpoint: some-quay.my-namespace.apps.mycluster.com 4.6. QuayRegistry API The Red Hat Quay Operator provides the QuayRegistry custom resource API to declaratively manage Quay container registries on the cluster. Use either the OpenShift Container Platform UI or a command-line tool to interact with this API. Creating a QuayRegistry results in the Red Hat Quay Operator deploying and configuring all necessary resources needed to run Red Hat Quay on the cluster. Editing a QuayRegistry results in the Red Hat Quay Operator reconciling the changes and creating, updating, and deleting objects to match the desired configuration. Deleting a QuayRegistry results in garbage collection of all previously created resources. After deletion, the Quay container registry is no longer available. QuayRegistry API fields are outlined in the following sections. | [
"--- SERVER_HOSTNAME: example-quay-openshift-operators.apps.docs.quayteam.org SETUP_COMPLETE: true SUPER_USERS: - quayadmin TAG_EXPIRATION_OPTIONS: - 2w - 4w ---",
"status: registryEndpoint: some-quay.my-namespace.apps.mycluster.com"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_operator_features/operator-config-ui |
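The information surfaced in the console screens above can also be read from the command line. The following sketch assumes the example-registry instance used in this chapter and a placeholder namespace of quay-enterprise:

# External access point from the QuayRegistry status block (section 4.5).
oc get quayregistry example-registry -n quay-enterprise \
  -o jsonpath='{.status.registryEndpoint}'
# Reveal the updated config.yaml from the config bundle secret (section 4.3).
BUNDLE=$(oc get quayregistry example-registry -n quay-enterprise \
  -o jsonpath='{.spec.configBundleSecret}')
oc get secret "${BUNDLE}" -n quay-enterprise \
  -o jsonpath='{.data.config\.yaml}' | base64 -d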
Chapter 18. Contacting Global Support Services | Chapter 18. Contacting Global Support Services Unless you have a Self-Support subscription, when both the Red Hat Documentation website and Customer Portal fail to provide the answers to your questions, you can contact Global Support Services ( GSS ). 18.1. Gathering Required Information Several items of information should be gathered before contacting GSS . Background Information Ensure you have the following background information at hand before calling GSS : Hardware type, make, and model on which the product runs Software version Latest upgrades Any recent changes to the system An explanation of the problem and the symptoms Any messages or significant information about the issue Note If you ever forget your Red Hat login information, it can be recovered at https://access.redhat.com/site/help/LoginAssistance.html . Diagnostics The diagnostics report for Red Hat Enterprise Linux is required as well. This report is also known as a sosreport and the program to create the report is provided by the sos package. To install the sos package and all its dependencies on your system: To generate the report: For more information, access the Knowledgebase article at https://access.redhat.com/kb/docs/DOC-3593 . Account and Contact Information In order to help you, GSS requires your account information to customize their support, as well contact information to get back to you. When you contact GSS ensure you have your: Red Hat customer number or Red Hat Network (RHN) login name Company name Contact name Preferred method of contact (phone or email) and contact information (phone number or email address) Issue Severity Determining an issue's severity is important to allow the GSS team to prioritize their work. There are four levels of severity. Severity 1 (urgent) A problem that severely impacts your use of the software for production purposes. It halts your business operations and has no procedural workaround. Severity 2 (high) A problem where the software is functioning, but production is severely reduced. It causes a high impact to business operations, and no workaround exists. Severity 3 (medium) A problem that involves partial, non-critical loss of the use of the software. There is a medium to low impact on your business, and business continues to function by utilizing a workaround. Severity 4 (low) A general usage question, report of a documentation error, or a recommendation for a future product improvement. For more information on determining the severity level of an issue, see https://access.redhat.com/support/policy/severity . Once the issue severity has been determined, submit a service request through the Customer Portal under the Connect option, or at https://access.redhat.com/support/contact/technicalSupport.html . Note that you need your Red Hat login details in order to submit service requests. If the severity is level 1 or 2, then follow up your service request with a phone call. Contact information and business hours are found at https://access.redhat.com/support/contact/technicalSupport.html . If you have a premium subscription, then after hours support is available for Severity 1 and 2 cases. Turn-around rates for both premium subscriptions and standard subscription can be found at https://access.redhat.com/support/offerings/production/sla . 18.2. Escalating an Issue If you feel an issue is not being handled correctly or adequately, you can escalate it. 
There are two types of escalations: Technical escalation If an issue is not being resolved appropriately or if you need a more senior resource to attend to it. Management escalation If the issue has become more severe or you believe it requires a higher priority. More information on escalation, including contacts, is available at https://access.redhat.com/support/escalation . 18.3. Re-opening a Service Request If there is more relevant information regarding a closed service request (such as the problem reoccurring), you can re-open the request via the Red Hat Customer Portal at https://access.redhat.com/support/policy/mgt_escalation.html or by calling your local support center, the details of which can be found at https://access.redhat.com/support/contact/technicalSupport.html . Important In order to re-open a service request, you need the original service-request number. 18.4. Additional Resources For more information, see the resources listed below. Online Documentation Getting Started - The Getting Started page serves as a starting point for people who purchased a Red Hat subscription and offers the Red Hat Welcome Kit and the Quick Guide to Red Hat Support for download. How can a RHEL Self-Support subscription be used? - A Knowledgebase article for customers with a Self-Support subscription. Red Hat Global Support Services and public mailing lists - A Knowledgebase article that answers frequent questions about public Red Hat mailing lists. | [
"yum install sos",
"sosreport"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/user_guide/chap-contacting_global_support_services |
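The following is a short sketch of gathering the background information and diagnostics described above before opening a service request; the case number is a placeholder, and flag availability depends on the installed sos version (check sosreport --help):

cat /etc/redhat-release             # product and version
uname -r                            # running kernel
sudo yum install -y sos             # provides the sosreport program
sudo sosreport --case-id 01234567   # attach the generated archive to the case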
Backup and restore | Backup and restore Red Hat Advanced Cluster Security for Kubernetes 4.6 Backing up and restoring Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/backup_and_restore/index |
Chapter 1. Distributed tracing release notes | Chapter 1. Distributed tracing release notes 1.1. Release notes for Red Hat OpenShift distributed tracing platform 3.0 1.1.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis The distributed tracing platform consists of three components: Red Hat OpenShift distributed tracing platform (Jaeger) , which is based on the open source Jaeger project . Red Hat OpenShift distributed tracing platform (Tempo) , which is based on the open source Grafana Tempo project . Red Hat build of OpenTelemetry , which is based on the open source OpenTelemetry project . 1.1.2. Component versions in the Red Hat OpenShift distributed tracing platform 3.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.51.0 Red Hat build of OpenTelemetry OpenTelemetry 0.89.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.3.0 1.1.3. Red Hat OpenShift distributed tracing platform (Jaeger) 1.1.3.1. Deprecated functionality In Red Hat OpenShift distributed tracing 3.0, Jaeger and Elasticsearch are deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. In Red Hat OpenShift distributed tracing 3.0, Tempo provided by the Tempo Operator and the OpenTelemetry collector provided by the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. The OpenTelemetry and Tempo distributed tracing stack is to be adopted by all users because this will be the stack that will be enhanced going forward. 1.1.3.2. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Jaeger): Support for the ARM architecture. Support for cluster-wide proxy environments. 1.1.3.3. Bug fixes This update introduces the following bug fixes for the distributed tracing platform (Jaeger): Fixed support for disconnected environments when using the oc adm catalog mirror CLI command. ( TRACING-3546 ) 1.1.3.4. Known issues Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power Systems architectures. 1.1.4. Red Hat OpenShift distributed tracing platform (Tempo) 1.1.4.1. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Tempo): Support for the ARM architecture. Support for span request count, duration, and error count (RED) metrics. The metrics can be visualized in the Jaeger console deployed as part of Tempo or in the web console in the Observe menu. 1.1.4.2. Bug fixes This update introduces the following bug fixes for the distributed tracing platform (Tempo): Fixed support for the custom TLS CA option for connecting to object storage. 
( TRACING-3462 ) Fixed support for disconnected environments when using the oc adm catalog mirror CLI command. ( TRACING-3523 ) Fixed mTLS when Gateway is not deployed. ( TRACING-3510 ) 1.1.4.3. Known issues Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.1.5. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.1.6. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.2. Release notes for Red Hat OpenShift distributed tracing platform 2.9.2 1.2.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis The distributed tracing platform consists of three components: Red Hat OpenShift distributed tracing platform (Jaeger) , which is based on the open source Jaeger project . Red Hat OpenShift distributed tracing platform (Tempo) , which is based on the open source Grafana Tempo project . Red Hat build of OpenTelemetry , which is based on the open source OpenTelemetry project . 1.2.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.9.2 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat build of OpenTelemetry OpenTelemetry 0.81.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.2.3. CVEs This release fixes CVE-2023-46234 . 1.2.4. Red Hat OpenShift distributed tracing platform (Jaeger) 1.2.4.1. Known issues Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on IBM Z and IBM Power Systems. 1.2.5. 
Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.2.5.1. Known issues Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. Disable the mTLS in the operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.2.6. Red Hat build of OpenTelemetry Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.2.6.1. Known issues Currently, you must manually set operator maturity to Level IV, Deep Insights. ( TRACING-3431 ) 1.2.7. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.2.8. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.3. Release notes for Red Hat OpenShift distributed tracing platform 2.9.1 1.3.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis The distributed tracing platform consists of three components: Red Hat OpenShift distributed tracing platform (Jaeger) , which is based on the open source Jaeger project . Red Hat OpenShift distributed tracing platform (Tempo) , which is based on the open source Grafana Tempo project . Red Hat build of OpenTelemetry , which is based on the open source OpenTelemetry project . 1.3.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.9.1 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat build of OpenTelemetry OpenTelemetry 0.81.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.3.3. CVEs This release fixes CVE-2023-44487 . 1.3.4. Red Hat OpenShift distributed tracing platform (Jaeger) 1.3.4.1. Known issues Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on IBM Z and IBM Power Systems. 1.3.5. Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.3.5.1. Known issues Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. Disable the mTLS in the operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.3.6. Red Hat build of OpenTelemetry Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.3.6.1. Known issues Currently, you must manually set operator maturity to Level IV, Deep Insights. ( TRACING-3431 ) 1.3.7. 
Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.3.8. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.4. Release notes for Red Hat OpenShift distributed tracing platform 2.9 1.4.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis The distributed tracing platform consists of three components: Red Hat OpenShift distributed tracing platform (Jaeger) , which is based on the open source Jaeger project . Red Hat OpenShift distributed tracing platform (Tempo) , which is based on the open source Grafana Tempo project . Red Hat build of OpenTelemetry , which is based on the open source OpenTelemetry project . 1.4.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.9 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat build of OpenTelemetry OpenTelemetry 0.81.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.4.3. Red Hat OpenShift distributed tracing platform (Jaeger) 1.4.3.1. New features and enhancements None. 1.4.3.2. Bug fixes Before this update, connection was refused due to a missing gRPC port on the jaeger-query deployment. This issue resulted in transport: Error while dialing: dial tcp :16685: connect: connection refused error message. With this update, the Jaeger Query gRPC port (16685) is successfully exposed on the Jaeger Query service. ( TRACING-3322 ) Before this update, the wrong port was exposed for jaeger-production-query , resulting in refused connection. With this update, the issue is fixed by exposing the Jaeger Query gRPC port (16685) on the Jaeger Query deployment. ( TRACING-2968 ) Before this update, when deploying Service Mesh on single-node OpenShift clusters in disconnected environments, the Jaeger pod frequently went into the Pending state. With this update, the issue is fixed. 
( TRACING-3312 ) Before this update, the Jaeger Operator pod restarted with the default memory value due to the reason: OOMKilled error message. With this update, this issue is fixed by removing the resource limits. ( TRACING-3173 ) 1.4.3.3. Known issues Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on IBM Z and IBM Power Systems. 1.4.4. Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.4.4.1. New features and enhancements This release introduces the following enhancements for the distributed tracing platform (Tempo): Support the operator maturity Level IV, Deep Insights, which enables upgrading, monitoring, and alerting of TempoStack instances and the Tempo Operator. Add Ingress and Route configuration for the Gateway. Support the managed and unmanaged states in the TempoStack custom resource. Expose the following additional ingestion protocols in the Distributor service: Jaeger Thrift binary, Jaeger Thrift compact, Jaeger gRPC, and Zipkin. When the Gateway is enabled, only the OpenTelemetry protocol (OTLP) gRPC is enabled. Expose the Jaeger Query gRPC endpoint on the Query Frontend service. Support multitenancy without Gateway authentication and authorization. 1.4.4.2. Bug fixes Before this update, the Tempo Operator was not compatible with disconnected environments. With this update, the Tempo Operator supports disconnected environments. ( TRACING-3145 ) Before this update, the Tempo Operator with TLS failed to start on OpenShift Container Platform. With this update, the mTLS communication is enabled between Tempo components, the Operand starts successfully, and the Jaeger UI is accessible. ( TRACING-3091 ) Before this update, the resource limits from the Tempo Operator caused error messages such as reason: OOMKilled . With this update, the resource limits for the Tempo Operator are removed to avoid such errors. ( TRACING-3204 ) 1.4.4.3. Known issues Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. 
Disable the mTLS in the operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.4.5. Red Hat build of OpenTelemetry Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.4.5.1. New features and enhancements This release introduces the following enhancements for the Red Hat build of OpenTelemetry: Support OTLP metrics ingestion. The metrics can be forwarded and stored in the user-workload-monitoring via the Prometheus exporter. Support the operator maturity Level IV, Deep Insights, which enables upgrading and monitoring of OpenTelemetry Collector instances and the Red Hat build of OpenTelemetry Operator. Report traces and metrics from remote clusters using OTLP or HTTP and HTTPS. Collect OpenShift Container Platform resource attributes via the resourcedetection processor. Support the managed and unmanaged states in the OpenTelemetryCollector custom resouce. 1.4.5.2. Bug fixes None. 1.4.5.3. Known issues Currently, you must manually set operator maturity to Level IV, Deep Insights. ( TRACING-3431 ) 1.4.6. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. 
If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.4.7. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.5. Release notes for Red Hat OpenShift distributed tracing platform 2.8 1.5.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis The distributed tracing platform consists of three components: Red Hat OpenShift distributed tracing platform (Jaeger) , which is based on the open source Jaeger project . Red Hat OpenShift distributed tracing platform (Tempo) , which is based on the open source Grafana Tempo project . Red Hat build of OpenTelemetry , which is based on the open source OpenTelemetry project . 1.5.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.8 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.42 Red Hat build of OpenTelemetry OpenTelemetry 0.74.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 0.1.0 1.5.3. Technology Preview features This release introduces support for the Red Hat OpenShift distributed tracing platform (Tempo) as a Technology Preview feature for Red Hat OpenShift distributed tracing platform. Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The feature uses version 0.1.0 of the Red Hat OpenShift distributed tracing platform (Tempo) and version 2.0.1 of the upstream distributed tracing platform (Tempo) components. You can use the distributed tracing platform (Tempo) to replace Jaeger so that you can use S3-compatible storage instead of ElasticSearch. Most users who use the distributed tracing platform (Tempo) instead of Jaeger will not notice any difference in functionality because the distributed tracing platform (Tempo) supports the same ingestion and query protocols as Jaeger and uses the same user interface. If you enable this Technology Preview feature, note the following limitations of the current implementation: The distributed tracing platform (Tempo) currently does not support disconnected installations. 
( TRACING-3145 ) When you use the Jaeger user interface (UI) with the distributed tracing platform (Tempo), the Jaeger UI lists only services that have sent traces within the last 15 minutes. For services that have not sent traces within the last 15 minutes, those traces are still stored even though they are not visible in the Jaeger UI. ( TRACING-3139 ) Expanded support for the Tempo Operator is planned for future releases of the Red Hat OpenShift distributed tracing platform. Possible additional features might include support for TLS authentication, multitenancy, and multiple clusters. For more information about the Tempo Operator, see the Tempo community documentation . 1.5.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.5.5. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.5.6. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.6. Release notes for Red Hat OpenShift distributed tracing platform 2.7 1.6.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis The distributed tracing platform consists of three components: Red Hat OpenShift distributed tracing platform (Jaeger) , which is based on the open source Jaeger project . Red Hat OpenShift distributed tracing platform (Tempo) , which is based on the open source Grafana Tempo project . Red Hat build of OpenTelemetry , which is based on the open source OpenTelemetry project . 1.6.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.7 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.39 Red Hat build of OpenTelemetry OpenTelemetry 0.63.1 1.6.3. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.6.4. 
Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.6.5. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.7. Release notes for Red Hat OpenShift distributed tracing platform 2.6 1.7.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis The distributed tracing platform consists of three components: Red Hat OpenShift distributed tracing platform (Jaeger) , which is based on the open source Jaeger project . Red Hat OpenShift distributed tracing platform (Tempo) , which is based on the open source Grafana Tempo project . Red Hat build of OpenTelemetry , which is based on the open source OpenTelemetry project . 1.7.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.6 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.38 Red Hat build of OpenTelemetry OpenTelemetry 0.60 1.7.3. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.7.4. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.7.5. 
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.8. Release notes for Red Hat OpenShift distributed tracing platform 2.5 1.8.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis The distributed tracing platform consists of three components: Red Hat OpenShift distributed tracing platform (Jaeger) , which is based on the open source Jaeger project . Red Hat OpenShift distributed tracing platform (Tempo) , which is based on the open source Grafana Tempo project . Red Hat build of OpenTelemetry , which is based on the open source OpenTelemetry project . 1.8.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.5 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.36 Red Hat build of OpenTelemetry OpenTelemetry 0.56 1.8.3. New features and enhancements This release introduces support for ingesting OpenTelemetry protocol (OTLP) to the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. The Operator now automatically enables the OTLP ports: Port 4317 for the OTLP gRPC protocol. Port 4318 for the OTLP HTTP protocol. This release also adds support for collecting Kubernetes resource attributes to the Red Hat build of OpenTelemetry Operator. 1.8.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.8.5. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.8.6. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.9. Release notes for Red Hat OpenShift distributed tracing platform 2.4 1.9.1. 
Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis The distributed tracing platform consists of three components: Red Hat OpenShift distributed tracing platform (Jaeger) , which is based on the open source Jaeger project . Red Hat OpenShift distributed tracing platform (Tempo) , which is based on the open source Grafana Tempo project . Red Hat build of OpenTelemetry , which is based on the open source OpenTelemetry project . 1.9.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.4 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.34.1 Red Hat build of OpenTelemetry OpenTelemetry 0.49 1.9.3. New features and enhancements This release adds support for auto-provisioning certificates using the Red Hat Elasticsearch Operator. Self-provisioning by using the Red Hat OpenShift distributed tracing platform (Jaeger) Operator to call the Red Hat Elasticsearch Operator during installation. Important When upgrading to the Red Hat OpenShift distributed tracing platform 2.4, the operator recreates the Elasticsearch instance, which might take five to ten minutes. Distributed tracing will be down and unavailable for that period. 1.9.4. Technology Preview features Creating the Elasticsearch instance and certificates first and then configuring the distributed tracing platform (Jaeger) to use the certificate is a Technology Preview for this release. 1.9.5. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.9.6. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.9.7. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.10. Release notes for Red Hat OpenShift distributed tracing platform 2.3 1.10.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. 
You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis The distributed tracing platform consists of three components: Red Hat OpenShift distributed tracing platform (Jaeger) , which is based on the open source Jaeger project . Red Hat OpenShift distributed tracing platform (Tempo) , which is based on the open source Grafana Tempo project . Red Hat build of OpenTelemetry , which is based on the open source OpenTelemetry project . 1.10.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.3.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.30.1 Red Hat build of OpenTelemetry OpenTelemetry 0.44.0 1.10.3. Component versions in the Red Hat OpenShift distributed tracing platform 2.3.1 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.30.2 Red Hat build of OpenTelemetry OpenTelemetry 0.44.1-1 1.10.4. New features and enhancements With this release, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator is now installed to the openshift-distributed-tracing namespace by default. Before this update, the default installation had been in the openshift-operators namespace. 1.10.5. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.10.6. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.10.7. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.11. Release notes for Red Hat OpenShift distributed tracing platform 2.2 1.11.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. 
With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis The distributed tracing platform consists of three components: Red Hat OpenShift distributed tracing platform (Jaeger) , which is based on the open source Jaeger project . Red Hat OpenShift distributed tracing platform (Tempo) , which is based on the open source Grafana Tempo project . Red Hat build of OpenTelemetry , which is based on the open source OpenTelemetry project . 1.11.2. Technology Preview features The unsupported OpenTelemetry Collector components included in the 2.1 release are removed. 1.11.3. Bug fixes This release of the Red Hat OpenShift distributed tracing platform addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.11.4. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.11.5. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.12. Release notes for Red Hat OpenShift distributed tracing platform 2.1 1.12.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis The distributed tracing platform consists of three components: Red Hat OpenShift distributed tracing platform (Jaeger) , which is based on the open source Jaeger project . Red Hat OpenShift distributed tracing platform (Tempo) , which is based on the open source Grafana Tempo project . Red Hat build of OpenTelemetry , which is based on the open source OpenTelemetry project . 1.12.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.1.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.29.1 Red Hat build of OpenTelemetry OpenTelemetry 0.41.1 1.12.3. Technology Preview features This release introduces a breaking change to how to configure certificates in the OpenTelemetry custom resource file. 
With this update, the ca_file moves under tls in the custom resource, as shown in the following examples. CA file configuration for OpenTelemetry version 0.33 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" CA file configuration for OpenTelemetry version 0.41.1 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" 1.12.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.12.5. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.12.6. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.13. Release notes for Red Hat OpenShift distributed tracing platform 2.0 1.13.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis The distributed tracing platform consists of three components: Red Hat OpenShift distributed tracing platform (Jaeger) , which is based on the open source Jaeger project . Red Hat OpenShift distributed tracing platform (Tempo) , which is based on the open source Grafana Tempo project . Red Hat build of OpenTelemetry , which is based on the open source OpenTelemetry project . 1.13.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.0.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.28.0 Red Hat build of OpenTelemetry OpenTelemetry 0.33.0 1.13.3. New features and enhancements This release introduces the following new features and enhancements: Rebrands Red Hat OpenShift Jaeger as the Red Hat OpenShift distributed tracing platform. Updates Red Hat OpenShift distributed tracing platform (Jaeger) Operator to Jaeger 1.28. 
Going forward, the Red Hat OpenShift distributed tracing platform will only support the stable Operator channel. Channels for individual releases are no longer supported. Adds support for OpenTelemetry protocol (OTLP) to the Query service. Introduces a new distributed tracing icon that appears in the OperatorHub. Includes rolling updates to the documentation to support the name change and new features. 1.13.4. Technology Preview features This release adds the Red Hat build of OpenTelemetry as a Technology Preview , which you install using the Red Hat build of OpenTelemetry Operator. Red Hat build of OpenTelemetry is based on the OpenTelemetry APIs and instrumentation. The Red Hat build of OpenTelemetry includes the OpenTelemetry Operator and Collector. You can use the Collector to receive traces in the OpenTelemetry or Jaeger protocol and send the trace data to the Red Hat OpenShift distributed tracing platform. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling. 1.13.5. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.13.6. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.13.7. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | [
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\""
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/distributed_tracing/distributed-tracing-release-notes |
Chapter 8. Performing overcloud post-installation tasks | Chapter 8. Performing overcloud post-installation tasks This chapter contains information about tasks to perform immediately after you create your overcloud. These tasks ensure your overcloud is ready to use. 8.1. Checking overcloud deployment status To check the deployment status of the overcloud, use the openstack overcloud status command. This command returns the result of all deployment steps. Procedure Source the stackrc file: Run the deployment status command: The output of this command displays the status of the overcloud: If your overcloud uses a different name, use the --stack argument to select an overcloud with a different name: Replace <overcloud_name> with the name of your overcloud. 8.2. Creating basic overcloud flavors Validation steps in this guide assume that your installation contains flavors. If you have not already created at least one flavor, complete the following steps to create a basic set of default flavors that have a range of storage and processing capabilities: Procedure Source the overcloudrc file: Run the openstack flavor create command to create a flavor. Use the following options to specify the hardware requirements for each flavor: --disk Defines the hard disk space for a virtual machine volume. --ram Defines the RAM required for a virtual machine. --vcpus Defines the quantity of virtual CPUs for a virtual machine. The following example creates the default overcloud flavors: Note Use USD openstack flavor create --help to learn more about the openstack flavor create command. 8.3. Creating a default tenant network The overcloud requires a default Tenant network so that virtual machines can communicate internally. Procedure Source the overcloudrc file: Create the default Tenant network: Create a subnet on the network: Confirm the created network: These commands create a basic Networking service (neutron) network named default . The overcloud automatically assigns IP addresses from this network to virtual machines using an internal DHCP mechanism. 8.4. Creating a default floating IP network To access your virtual machines from outside of the overcloud, you must configure an external network that provides floating IP addresses to your virtual machines. This procedure contains two examples. Use the example that best suits your environment: Native VLAN (flat network) Non-Native VLAN (VLAN network) Both of these examples involve creating a network with the name public . The overcloud requires this specific name for the default floating IP pool. This name is also important for the validation tests in Section 8.7, "Validating the overcloud" . By default, Openstack Networking (neutron) maps a physical network name called datacentre to the br-ex bridge on your host nodes. You connect the public overcloud network to the physical datacentre and this provides a gateway through the br-ex bridge. Prerequisites A dedicated interface or native VLAN for the floating IP network. Procedure Source the overcloudrc file: Create the public network: Create a flat network for a native VLAN connection: Create a vlan network for non-native VLAN connections: Use the --provider-segment option to define the VLAN that you want to use. In this example, the VLAN is 201 . Create a subnet with an allocation pool for floating IP addresses. In this example, the IP range is 10.1.1.51 to 10.1.1.250 : Ensure that this range does not conflict with other IP addresses in your external network. 8.5. 
Creating a default provider network A provider network is another type of external network connection that routes traffic from private tenant networks to external infrastructure network. The provider network is similar to a floating IP network but the provider network uses a logical router to connect private networks to the provider network. This procedure contains two examples. Use the example that best suits your environment: Native VLAN (flat network) Non-Native VLAN (VLAN network) By default, Openstack Networking (neutron) maps a physical network name called datacentre to the br-ex bridge on your host nodes. You connect the public overcloud network to the physical datacentre and this provides a gateway through the br-ex bridge. Procedure Source the overcloudrc file: Create the provider network: Create a flat network for a native VLAN connection: Create a vlan network for non-native VLAN connections: Use the --provider-segment option to define the VLAN that you want to use. In this example, the VLAN is 201 . Use the --share option to create a shared network. Alternatively, specify a tenant instead of specifying --share , so that only the tenant has access to the new network. Use the --external option to mark a provider network as external. Only the operator can create ports on an external network. Add a subnet to the provider network to provide DHCP services: Create a router so that other networks can route traffic through the provider network: Set the external gateway for the router to the provider network: Attach other networks to this router. For example, run the following command to attach a subnet subnet1 to the router: This command adds subnet1 to the routing table and allows traffic from virtual machines using subnet1 to route to the provider network. 8.6. Creating additional bridge mappings Floating IP networks can use any bridge, not just br-ex , provided that you map the additional bridge during deployment. Procedure To map a new bridge called br-floating to the floating physical network, include the NeutronBridgeMappings parameter in an environment file: With this method, you can create separate external networks after creating the overcloud. For example, to create a floating IP network that maps to the floating physical network, run the following commands: 8.7. Validating the overcloud The overcloud uses the OpenStack Integration Test Suite (tempest) tool set to conduct a series of integration tests. This section contains information about preparations for running the integration tests. For full instructions about how to use the OpenStack Integration Test Suite, see the Validating your cloud with the Red Hat OpenStack Platform Integration Test Suite . The Integration Test Suite requires a few post-installation steps to ensure successful tests. Procedure If you run this test from the undercloud, ensure that the undercloud host has access to the Internal API network on the overcloud. For example, add a temporary VLAN on the undercloud host to access the Internal API network (ID: 201) using the 172.16.0.201/24 address: Run the integration tests as described in the Validating your cloud with the Red Hat OpenStack Platform Integration Test Suite . After completing the validation, remove any temporary connections to the overcloud Internal API. In this example, use the following commands to remove the previously created VLAN on the undercloud: 8.8. 
Protecting the overcloud from removal You can set a custom policy for the Orchestration service (heat) to protect your overcloud from being deleted. To re-enable stack deletion, remove the prevent-stack-delete.yaml file from the custom_env_files parameter and run the openstack undercloud install command. Procedure Create an environment file named prevent-stack-delete.yaml . Set the HeatApiPolicies parameter: The heat-deny-action is a default policy that you must include in your undercloud installation. Set the heat-protect-overcloud policy to rule:deny_everybody to prevent anyone from deleting any stacks in the overcloud. Note Setting the overcloud protection to rule:deny_everybody means that you cannot perform any of the following functions: Delete the overcloud. Remove individual Compute or Storage nodes. Replace Controller nodes. Add the prevent-stack-delete.yaml environment file to the custom_env_files parameter in the undercloud.conf file: Run the undercloud installation command to refresh the configuration: | [
"source ~/stackrc",
"openstack overcloud status",
"+-----------+---------------------+---------------------+-------------------+ | Plan Name | Created | Updated | Deployment Status | +-----------+---------------------+---------------------+-------------------+ | overcloud | 2018-05-03 21:24:50 | 2018-05-03 21:27:59 | DEPLOY_SUCCESS | +-----------+---------------------+---------------------+-------------------+",
"openstack overcloud status --stack <overcloud_name>",
"source ~/overcloudrc",
"openstack flavor create m1.tiny --ram 512 --disk 0 --vcpus 1 openstack flavor create m1.smaller --ram 1024 --disk 0 --vcpus 1 openstack flavor create m1.small --ram 2048 --disk 10 --vcpus 1 openstack flavor create m1.medium --ram 3072 --disk 10 --vcpus 2 openstack flavor create m1.large --ram 8192 --disk 10 --vcpus 4 openstack flavor create m1.xlarge --ram 8192 --disk 10 --vcpus 8",
"source ~/overcloudrc",
"(overcloud) USD openstack network create default",
"(overcloud) USD openstack subnet create default --network default --gateway 172.20.1.1 --subnet-range 172.20.0.0/16",
"(overcloud) USD openstack network list +-----------------------+-------------+--------------------------------------+ | id | name | subnets | +-----------------------+-------------+--------------------------------------+ | 95fadaa1-5dda-4777... | default | 7e060813-35c5-462c-a56a-1c6f8f4f332f | +-----------------------+-------------+--------------------------------------+",
"source ~/overcloudrc",
"(overcloud) USD openstack network create public --external --provider-network-type flat --provider-physical-network datacentre",
"(overcloud) USD openstack network create public --external --provider-network-type vlan --provider-physical-network datacentre --provider-segment 201",
"(overcloud) USD openstack subnet create public --network public --dhcp --allocation-pool start=10.1.1.51,end=10.1.1.250 --gateway 10.1.1.1 --subnet-range 10.1.1.0/24",
"source ~/overcloudrc",
"(overcloud) USD openstack network create provider --external --provider-network-type flat --provider-physical-network datacentre --share",
"(overcloud) USD openstack network create provider --external --provider-network-type vlan --provider-physical-network datacentre --provider-segment 201 --share",
"(overcloud) USD openstack subnet create provider-subnet --network provider --dhcp --allocation-pool start=10.9.101.50,end=10.9.101.100 --gateway 10.9.101.254 --subnet-range 10.9.101.0/24",
"(overcloud) USD openstack router create external",
"(overcloud) USD openstack router set --external-gateway provider external",
"(overcloud) USD openstack router add subnet external subnet1",
"parameter_defaults: NeutronBridgeMappings: \"datacentre:br-ex,floating:br-floating\"",
"source ~/overcloudrc (overcloud) USD openstack network create public --external --provider-physical-network floating --provider-network-type vlan --provider-segment 105 (overcloud) USD openstack subnet create public --network public --dhcp --allocation-pool start=10.1.2.51,end=10.1.2.250 --gateway 10.1.2.1 --subnet-range 10.1.2.0/24",
"source ~/stackrc (undercloud) USD sudo ovs-vsctl add-port br-ctlplane vlan201 tag=201 -- set interface vlan201 type=internal (undercloud) USD sudo ip l set dev vlan201 up; sudo ip addr add 172.16.0.201/24 dev vlan201",
"source ~/stackrc (undercloud) USD sudo ovs-vsctl del-port vlan201",
"parameter_defaults: HeatApiPolicies: heat-deny-action: key: 'actions:action' value: 'rule:deny_everybody' heat-protect-overcloud: key: 'stacks:delete' value: 'rule:deny_everybody'",
"custom_env_files = prevent-stack-delete.yaml",
"openstack undercloud install"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/installing_and_managing_red_hat_openstack_platform_with_director/assembly_performing-overcloud-post-installation-tasks |
Chapter 3. AMQ Streams deployment of Kafka | Chapter 3. AMQ Streams deployment of Kafka Apache Kafka components are provided for deployment to OpenShift with the AMQ Streams distribution. The Kafka components are generally run as clusters for availability. A typical deployment incorporating Kafka components might include: Kafka cluster of broker nodes ZooKeeper cluster of replicated ZooKeeper instances Kafka Connect cluster for external data connections Kafka MirrorMaker cluster to mirror the Kafka cluster in a secondary cluster Kafka Exporter to extract additional Kafka metrics data for monitoring Kafka Bridge to make HTTP-based requests to the Kafka cluster Not all of these components are mandatory, though you need Kafka and ZooKeeper as a minimum. Some components can be deployed without Kafka, such as MirrorMaker or Kafka Connect. 3.1. Kafka component architecture A cluster of Kafka brokers is the main part of the Apache Kafka project responsible for delivering messages. A broker uses Apache ZooKeeper for storing configuration data and for cluster coordination. Before running Apache Kafka, an Apache ZooKeeper cluster must be ready. Each of the other Kafka components interacts with the Kafka cluster to perform a specific role. Kafka component interaction Apache ZooKeeper Apache ZooKeeper is a core dependency for Kafka as it provides a cluster coordination service, storing and tracking the status of brokers and consumers. ZooKeeper is also used for leader election of partitions. Kafka Connect Kafka Connect is an integration toolkit for streaming data between Kafka brokers and other systems using Connector plugins. Kafka Connect provides a framework for integrating Kafka with an external data source or target, such as a database, for import or export of data using connectors. Connectors are plugins that provide the connection configuration needed. A source connector pushes external data into Kafka. A sink connector extracts data out of Kafka. External data is translated and transformed into the appropriate format. You can deploy Kafka Connect with Source2Image support, which provides a convenient way to include connectors. Kafka MirrorMaker Kafka MirrorMaker replicates data between two Kafka clusters, within or across data centers. MirrorMaker takes messages from a source Kafka cluster and writes them to a target Kafka cluster. Kafka Bridge Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster. Kafka Exporter Kafka Exporter extracts data for analysis as Prometheus metrics, primarily data relating to offsets, consumer groups, consumer lag, and topics. Consumer lag is the delay between the last message written to a partition and the message currently being picked up from that partition by a consumer. 3.2. Kafka Bridge interface The Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. It offers the advantages of a web API connection to AMQ Streams, without the need for client applications to interpret the Kafka protocol. The API has two main resources - consumers and topics - that are exposed and made accessible through endpoints to interact with consumers and producers in your Kafka cluster. The resources relate only to the Kafka Bridge, not the consumers and producers connected directly to Kafka. 3.2.1. HTTP requests The Kafka Bridge supports HTTP requests to a Kafka cluster, with methods to: Send messages to a topic. Retrieve messages from topics. Retrieve a list of partitions for a topic.
Create and delete consumers. Subscribe consumers to topics, so that they start receiving messages from those topics. Retrieve a list of topics that a consumer is subscribed to. Unsubscribe consumers from topics. Assign partitions to consumers. Commit a list of consumer offsets. Seek on a partition, so that a consumer starts receiving messages from the first or last offset position, or a given offset position. The methods provide JSON responses and HTTP response code error handling. Messages can be sent in JSON or binary formats. Clients can produce and consume messages without the requirement to use the native Kafka protocol. Additional resources To view the API documentation, including example requests and responses, see the Kafka Bridge API reference . 3.2.2. Supported clients for the Kafka Bridge You can use the Kafka Bridge to integrate both internal and external HTTP client applications with your Kafka cluster. Internal clients Internal clients are container-based HTTP clients running in the same OpenShift cluster as the Kafka Bridge itself. Internal clients can access the Kafka Bridge on the host and port defined in the KafkaBridge custom resource. External clients External clients are HTTP clients running outside the OpenShift cluster in which the Kafka Bridge is deployed and running. External clients can access the Kafka Bridge through an OpenShift Route, a loadbalancer service, or using an Ingress. HTTP internal and external client integration | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/amq_streams_on_openshift_overview/kafka-components_str |
Chapter 1. Preparing to install on IBM Power | Chapter 1. Preparing to install on IBM Power 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Choosing a method to install OpenShift Container Platform on IBM Power You can install a cluster on IBM Power(R) infrastructure that you provision, by using one of the following methods: Installing a cluster on IBM Power(R) : You can install OpenShift Container Platform on IBM Power(R) infrastructure that you provision. Installing a cluster on IBM Power(R) in a restricted network : You can install OpenShift Container Platform on IBM Power(R) infrastructure that you provision in a restricted or disconnected network, by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_power/preparing-to-install-on-ibm-power |
Chapter 3. Configuring the Maven settings.xml file for the online repository | Chapter 3. Configuring the Maven settings.xml file for the online repository You can use the online Maven repository with your Maven project by configuring your user settings.xml file. This is the recommended approach. Maven settings used with a repository manager or repository on a shared server provide better control and manageability of projects. Note When you configure the repository by modifying the Maven settings.xml file, the changes apply to all of your Maven projects. Procedure Open the Maven ~/.m2/settings.xml file in a text editor or integrated development environment (IDE). Note If there is not a settings.xml file in the ~/.m2/ directory, copy the settings.xml file from the USDMAVEN_HOME/.m2/conf/ directory into the ~/.m2/ directory. Add the following lines to the <profiles> element of the settings.xml file: <!-- Configure the Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Add the following lines to the <activeProfiles> element of the settings.xml file and save the file. <activeProfile>red-hat-enterprise-maven-repository</activeProfile> 3.1. Creating a Spring Boot business application from Maven archetypes You can use Maven archetypes to create business applications that use the Spring Boot framework. Doing this by-passes the need to install and configure Red Hat Process Automation Manager. You can create a business asset project, a data model project, or a service project: Prerequisites Apache Maven 3.5 or higher Procedure Enter one of the following commands to create your Spring Boot business application project. In these commands, replace business-application with the name of your business application: To create a business asset project that contains business processes, rules, and forms: This command creates a project which generates business-application-kjar-1.0-SNAPSHOT.jar . To create a data model asset project that provides common data structures that are shared between the service projects and business assets projects: This command creates a project which generates business-application-model-1.0-SNAPSHOT.jar . To create a dynamic assets project that provides case management capabilities: This command creates a project which generates business-application-kjar-1.0-SNAPSHOT.jar . To create a service project, a deployable project that provides a service with various capabilities including the business logic that operates your business, enter one of the following commands: Business automation covers features for process management, case management, decision management and optimization. These will be by default configured in the service project of your business application but you can turn them off through configuration. 
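How you switch individual capabilities on or off depends on the version of the KIE Server Spring Boot starter that the generated service project pulls in, so treat the following as an illustrative sketch rather than a definitive reference: the toggles are typically plain flags in the service project's application.properties file, and the property names shown here are assumptions that you should confirm against your generated project.
# Illustrative capability toggles (confirm the exact property names in your generated service project)
kieserver.drools.enabled=true
kieserver.dmn.enabled=true
kieserver.jbpm.enabled=true
kieserver.casemgmt.enabled=true
kieserver.optaplanner.enabled=false
Setting a flag to false removes that capability from the resulting service without changing the archetype commands that follow.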
To create a business application service project (the default configuration) that includes features for process management, case management, decision management, and optimization: Decision management covers mainly decision and rules related features. To create a decision management service project that includes decision and rules-related features: Business optimization covers planning problems and solutions related features. To create a Red Hat build of OptaPlanner service project to help you solve planning problems and solutions related features: These commands create a project which generates business-application-service-1.0-SNAPSHOT.jar . In most cases, a service project includes business assets and data model projects. A business application can split services into smaller component service projects for better manageability. 3.2. Configuring an Red Hat Process Automation Manager Spring Boot project for the online Maven repository After you create your Red Hat Process Automation Manager Spring Boot project, configure it with the online Maven Repository to store your application data. Prerequisites You have a Spring Boot business application service file that you created using the Maven archetype command. For more information, see Section 3.1, "Creating a Spring Boot business application from Maven archetypes" . Procedure In the directory that contains your Red Hat Process Automation Manager Spring Boot application, open the <BUSINESS-APPLICATION>-service/pom.xml file in a text editor or IDE, where <BUSINESS-APPLICATION> is the name of your Spring Boot project. Add the following repository to the repositories element: <repository> <id>jboss-enterprise-repository-group</id> <name>Red Hat JBoss Enterprise Maven Repository</name> <url>https://maven.repository.redhat.com/ga/</url> <layout>default</layout> <releases> <updatePolicy>never</updatePolicy> </releases> <snapshots> <updatePolicy>daily</updatePolicy> </snapshots> </repository> Add the following plug-in repository to the pluginRepositories element: Note If your pom.xml file does not have the pluginRepositories element, add it as well. <pluginRepository> <id>jboss-enterprise-repository-group</id> <name>Red Hat JBoss Enterprise Maven Repository</name> <url>https://maven.repository.redhat.com/ga/</url> <layout>default</layout> <releases> <updatePolicy>never</updatePolicy> </releases> <snapshots> <updatePolicy>daily</updatePolicy> </snapshots> </pluginRepository> Doing this adds the productized Maven repository to your business application. 3.3. Downloading and configuring the Red Hat Process Automation Manager Maven repository If you do not want to use the online Maven repository, you can download and configure the Red Hat Process Automation Manager Maven repository. The Red Hat Process Automation Manager Maven repository contains many of the requirements that Java developers typically use to build their applications. This procedure describes how to edit the Maven settings.xml file to configure the Red Hat Process Automation Manager Maven repository. Note When you configure the repository by modifying the Maven settings.xml file, the changes apply to all of your Maven projects. Prerequisites You have created a Red Hat Process Automation Manager Spring Boot project. 
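Note Whether you use the online repository described in the previous sections or the downloaded repository that this procedure configures, you can verify the result with standard Maven plugins after the configuration is in place. The following commands are an optional check rather than part of the procedure, and are run from your project directory:
$ mvn help:active-profiles
$ mvn help:effective-settings
$ mvn -U dependency:resolve
The active profiles output should list red-hat-enterprise-maven-repository, and dependency resolution should succeed for the redhat-suffixed artifact versions that your project uses.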
Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required) and then select the following product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13 Maven Repository ( rhpam-7.13.5-maven-repository.zip ). Extract the downloaded archive. Change to the ~/.m2/ directory and open the Maven settings.xml file in a text editor or integrated development environment (IDE). Add the following lines to the <profiles> element of the Maven settings.xml file, where <MAVEN_REPOSITORY> is the path of the Maven repository that you downloaded. The format of <MAVEN_REPOSITORY> must be file://$PATH , for example file:///home/userX/rhpam-7.13.5.GA-maven-repository/maven-repository . <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url><MAVEN_REPOSITORY></url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url><MAVEN_REPOSITORY></url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Add the following lines to the <activeProfiles> element of the Maven settings.xml file and save the file. <activeProfile>red-hat-enterprise-maven-repository</activeProfile> Important If your Maven repository contains outdated artifacts, you might encounter one of the following Maven error messages when you build or deploy your project, where <ARTIFACT_NAME> is the name of a missing artifact and <PROJECT_NAME> is the name of the project you are trying to build: Missing artifact <PROJECT_NAME> [ERROR] Failed to execute goal on project <ARTIFACT_NAME> ; Could not resolve dependencies for <PROJECT_NAME> To resolve the issue, delete the cached version of your local repository located in the ~/.m2/repository directory to force a download of the latest Maven artifacts. | [
"<!-- Configure the Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>",
"<activeProfile>red-hat-enterprise-maven-repository</activeProfile>",
"mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-kjar-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024 -DgroupId=com.company -DartifactId=business-application-kjar -Dversion=1.0-SNAPSHOT -Dpackage=com.company",
"mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-model-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024 -DgroupId=com.company -DartifactId=business-application-model -Dversion=1.0-SNAPSHOT -Dpackage=com.company.model",
"mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-kjar-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024 -DcaseProject=true -DgroupId=com.company -DartifactId=business-application-kjar -Dversion=1.0-SNAPSHOT -Dpackage=com.company",
"mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-service-spring-boot-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024 -DgroupId=com.company -DartifactId=business-application-service -Dversion=1.0-SNAPSHOT -Dpackage=com.company.service -DappType=bpm",
"mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-service-spring-boot-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024 -DgroupId=com.company -DartifactId=business-application-service -Dversion=1.0-SNAPSHOT -Dpackage=com.company.service -DappType=brm",
"mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-service-spring-boot-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024 -DgroupId=com.company -DartifactId=business-application-service -Dversion=1.0-SNAPSHOT -Dpackage=com.company.service -DappType=planner",
"<repository> <id>jboss-enterprise-repository-group</id> <name>Red Hat JBoss Enterprise Maven Repository</name> <url>https://maven.repository.redhat.com/ga/</url> <layout>default</layout> <releases> <updatePolicy>never</updatePolicy> </releases> <snapshots> <updatePolicy>daily</updatePolicy> </snapshots> </repository>",
"<pluginRepository> <id>jboss-enterprise-repository-group</id> <name>Red Hat JBoss Enterprise Maven Repository</name> <url>https://maven.repository.redhat.com/ga/</url> <layout>default</layout> <releases> <updatePolicy>never</updatePolicy> </releases> <snapshots> <updatePolicy>daily</updatePolicy> </snapshots> </pluginRepository>",
"<profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url><MAVEN_REPOSITORY></url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url><MAVEN_REPOSITORY></url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>",
"<activeProfile>red-hat-enterprise-maven-repository</activeProfile>"
]
| https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/proc-online-maven_business-applications |
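The following shell sketch illustrates the workflow in the preceding entry; it is not part of the official procedure, and the service module name, artifact version, and the group directories under ~/.m2/repository are assumptions based on the artifact names mentioned above, so adjust them to your own project and to the artifacts reported in any build errors.

# Build the generated business application modules against the configured Red Hat repository
mvn clean install

# Start the generated Spring Boot service (name assumes the artifactId business-application-service)
java -jar business-application-service/target/business-application-service-1.0-SNAPSHOT.jar

# If a build fails because of outdated cached artifacts, remove the affected group directories
# from the local repository (or delete ~/.m2/repository entirely) and force an update check
rm -rf ~/.m2/repository/org/kie ~/.m2/repository/org/jbpm ~/.m2/repository/org/drools
mvn clean install -U

Running mvn help:effective-settings is also a quick way to confirm that the red-hat-enterprise-maven-repository profile is active for the build.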
Chapter 67. Microsoft SQL Server Sink | Chapter 67. Microsoft SQL Server Sink Send data to a Microsoft SQL Server Database. This Kamelet expects a JSON as body. The mapping between the JSON fields and parameters is done by key, so if you have the following query: 'INSERT INTO accounts (username,city) VALUES (:#username,:#city)' The Kamelet needs to receive as input something like: '{ "username":"oscerd", "city":"Rome"}' 67.1. Configuration Options The following table summarizes the configuration options available for the sqlserver-sink Kamelet: Property Name Description Type Default Example databaseName * Database Name The Database Name we are pointing string password * Password The password to use for accessing a secured SQL Server Database string query * Query The Query to execute against the SQL Server Database string "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName * Server Name Server Name for the data source string "localhost" username * Username The username to use for accessing a secured SQL Server Database string serverPort Server Port Server Port for the data source string 1433 Note Fields marked with an asterisk (*) are mandatory. 67.2. Dependencies At runtime, the sqlserver-sink Kamelet relies upon the presence of the following dependencies: camel:jackson camel:kamelet camel:sql mvn:org.apache.commons:commons-dbcp2:2.7.0.redhat-00001 mvn:com.microsoft.sqlserver:mssql-jdbc:9.2.1.jre11 67.3. Usage This section describes how you can use the sqlserver-sink . 67.3.1. Knative Sink You can use the sqlserver-sink Kamelet as a Knative sink by binding it to a Knative object. sqlserver-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sqlserver-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sqlserver-sink properties: databaseName: "The Database Name" password: "The Password" query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName: "localhost" username: "The Username" 67.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 67.3.1.2. Procedure for using the cluster CLI Save the sqlserver-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f sqlserver-sink-binding.yaml 67.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel sqlserver-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 67.3.2. Kafka Sink You can use the sqlserver-sink Kamelet as a Kafka sink by binding it to a Kafka topic. sqlserver-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sqlserver-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sqlserver-sink properties: databaseName: "The Database Name" password: "The Password" query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName: "localhost" username: "The Username" 67.3.2.1. 
Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you're connected to. 67.3.2.2. Procedure for using the cluster CLI Save the sqlserver-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f sqlserver-sink-binding.yaml 67.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic sqlserver-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 67.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/sqlserver-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sqlserver-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sqlserver-sink properties: databaseName: \"The Database Name\" password: \"The Password\" query: \"INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" serverName: \"localhost\" username: \"The Username\"",
"apply -f sqlserver-sink-binding.yaml",
"kamel bind channel:mychannel sqlserver-sink -p \"sink.databaseName=The Database Name\" -p \"sink.password=The Password\" -p \"sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" -p \"sink.serverName=localhost\" -p \"sink.username=The Username\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sqlserver-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sqlserver-sink properties: databaseName: \"The Database Name\" password: \"The Password\" query: \"INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" serverName: \"localhost\" username: \"The Username\"",
"apply -f sqlserver-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic sqlserver-sink -p \"sink.databaseName=The Database Name\" -p \"sink.password=The Password\" -p \"sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" -p \"sink.serverName=localhost\" -p \"sink.username=The Username\""
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/microsoft-sql-server-sink |
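One practical note on the preceding Microsoft SQL Server sink entry: the sample query assumes that an accounts table already exists in the target database. The following T-SQL sketch shows one possible matching table definition; the column types and sizes are assumptions for illustration and are not taken from the Kamelet.

CREATE TABLE accounts (
    username NVARCHAR(255) NOT NULL, -- bound to the :#username placeholder from the JSON body
    city     NVARCHAR(255)           -- bound to the :#city placeholder from the JSON body
);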
Chapter 3. Migrating to Apache Camel 3 | Chapter 3. Migrating to Apache Camel 3 This guide provides information on migrating from Red Hat Fuse 7 to Camel 3 on Spring Boot. NOTE There are important differences between Fuse 7 and Camel 3 in the components, such as modularization and XML Schema changes. See each component section for details. Java versions Camel 3 supports Java 17 and Java 11 but not Java 8. In Java 11, the JAXB modules have been removed from the JDK; therefore, you need to add them as Maven dependencies (if you use JAXB, such as when using the XML DSL or the camel-jaxb component): <dependency> <groupId>javax.xml.bind</groupId> <artifactId>jaxb-api</artifactId> <version>2.3.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-core</artifactId> <version>2.3.0.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-impl</artifactId> <version>2.3.2</version> </dependency> NOTE: The Java Platform, Standard Edition 11 Development Kit (JDK 11) is deprecated in the Camel Spring Boot 3.x release versions and is not supported from the 4.x release versions onward. 3.1. Modularization of camel-core In Camel 3.x, camel-core has been split into many JARs as follows: camel-api camel-base camel-caffeine-lrucache camel-cloud camel-core camel-jaxp camel-main camel-management-api camel-management camel-support camel-util camel-util-json Maven users of Apache Camel can keep using the dependency camel-core which has transitive dependencies on all of its modules, except for camel-main , and therefore no migration is needed. 3.2. Modularization of Components In Camel 3.x, some of the camel-core components are moved into individual components. camel-attachments camel-bean camel-browse camel-controlbus camel-dataformat camel-dataset camel-direct camel-directvm camel-file camel-language camel-log camel-mock camel-ref camel-rest camel-saga camel-scheduler camel-seda camel-stub camel-timer camel-validator camel-vm camel-xpath camel-xslt camel-xslt-saxon camel-zip-deflater 3.3. Default Shutdown Strategy Red Hat build of Apache Camel supports a shutdown strategy using org.apache.camel.spi.ShutdownStrategy which is responsible for shutting down routes in a graceful manner. Red Hat build of Apache Camel provides a default strategy in the org.apache.camel.impl.engine.DefaultShutdownStrategy to handle the graceful shutdown of the routes. Note The DefaultShutdownStrategy class has been moved from package org.apache.camel.impl to org.apache.camel.impl.engine in Apache Camel 3.x. When you configure a simple scheduled route policy to stop a route, the route stopping algorithm is automatically integrated with the graceful shutdown procedure. This means that the task waits until the current exchange has finished processing before shutting down the route. You can set a timeout, however, that forces the route to stop after the specified time, irrespective of whether or not the route has finished processing the exchange. During graceful shutdown, if you enable the DEBUG logging level on org.apache.camel.impl.engine.DefaultShutdownStrategy , then it logs the inflight exchange information. If you do not want to see these logs, you can turn this off by setting the option logInflightExchangesOnTimeout to false. 3.4.
Multiple CamelContexts per application not supported Support for multiple CamelContexts has been removed and only one CamelContext per deployment is recommended and supported. The context attribute on the various Camel annotations such as @EndpointInject , @Produce , @Consume etc. has therefore been removed. 3.5. Deprecated APIs and Components All deprecated APIs and components from Camel 2.x have been removed in Camel 3. 3.5.1. Removed components All deprecated components from Camel 2.x are removed in Camel 3.x, including the old camel-http , camel-hdfs , camel-mina , camel-mongodb , camel-netty , camel-netty-http , camel-quartz , camel-restlet and camel-rx components. Removed the camel-jibx component. Removed the camel-boon data format. Removed the camel-linkedin component as the Linkedin API 1.0 is no longer supported . Support for the new 2.0 API is tracked by CAMEL-13813 . The camel-zookeeper component has had its route policy functionality removed; instead, use ZooKeeperClusterService or the camel-zookeeper-master component. The camel-jetty component no longer supports the producer (which has been removed); use the camel-http component instead. The twitter-streaming component has been removed as it relied on the deprecated Twitter Streaming API and is no longer functional. 3.5.2. Renamed components The following components are renamed in Camel 3.x. The camel-microprofile-metrics component has been renamed to camel-micrometer . The test component has been renamed to dataset-test and moved out of camel-core into the camel-dataset JAR. The http4 component has been renamed to http , and its corresponding component package has changed from org.apache.camel.component.http4 to org.apache.camel.component.http . The supported schemes are now only http and https . The hdfs2 component has been renamed to hdfs , and its corresponding component package has changed from org.apache.camel.component.hdfs2 to org.apache.camel.component.hdfs . The supported scheme is now hdfs . The mina2 component has been renamed to mina , and its corresponding component package has changed from org.apache.camel.component.mina2 to org.apache.camel.component.mina . The supported scheme is now mina . The mongodb3 component has been renamed to mongodb , and its corresponding component package has changed from org.apache.camel.component.mongodb3 to org.apache.camel.component.mongodb . The supported scheme is now mongodb . The netty4-http component has been renamed to netty-http , and its corresponding component package has changed from org.apache.camel.component.netty4.http to org.apache.camel.component.netty.http . The supported scheme is now netty-http . The netty4 component has been renamed to netty , and its corresponding component package has changed from org.apache.camel.component.netty4 to org.apache.camel.component.netty . The supported scheme is now netty . The quartz2 component has been renamed to quartz , and its corresponding component package has changed from org.apache.camel.component.quartz2 to org.apache.camel.component.quartz . The supported scheme is now quartz . The rxjava2 component has been renamed to rxjava , and its corresponding component package has changed from org.apache.camel.component.rxjava2 to org.apache.camel.component.rxjava . Renamed camel-jetty9 to camel-jetty . The supported scheme is now jetty . 3.6. Changes to Camel components 3.6.1. Mock component The mock component has been moved out of camel-core . Because of this, a number of methods on its assertion clause builder have been removed. 3.6.2.
ActiveMQ If you are using the activemq-camel component, then you should migrate to the camel-activemq component, where the component class has changed from org.apache.activemq.camel.component.ActiveMQComponent to org.apache.camel.component.activemq.ActiveMQComponent . 3.6.3. AWS The component camel-aws has been split into multiple components: camel-aws-cw camel-aws-ddb (which contains both ddb and ddbstreams components) camel-aws-ec2 camel-aws-iam camel-aws-kinesis (which contains both kinesis and kinesis-firehose components) camel-aws-kms camel-aws-lambda camel-aws-mq camel-aws-s3 camel-aws-sdb camel-aws-ses camel-aws-sns camel-aws-sqs camel-aws-swf Note It is recommended to add specific dependencies for these components. 3.6.4. Camel CXF The camel-cxf JAR has been divided into SOAP versus REST and Spring versus non-Spring JARs. It is recommended to choose the specific JAR from the following list when migrating from camel-cxf . camel-cxf-soap camel-cxf-spring-soap camel-cxf-rest camel-cxf-spring-rest camel-cxf-transport camel-cxf-spring-transport For example, if you were using CXF for SOAP and with Spring XML, then select camel-cxf-spring-soap and camel-cxf-spring-transport when migrating from camel-cxf . When using Spring Boot, choose one of the following starters, depending on whether you use SOAP or REST, when you migrate from camel-cxf-starter : camel-cxf-soap-starter camel-cxf-rest-starter 3.6.4.1. Camel CXF changed namespaces The camel-cxf XML XSD schemas have also changed namespaces. Table 3.1. Changes to namespaces Old Namespace New Namespace http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/jaxws http://camel.apache.org/schema/cxf/camel-cxf.xsd http://camel.apache.org/schema/cxf/jaxws/camel-cxf.xsd http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/jaxrs http://camel.apache.org/schema/cxf/camel-cxf.xsd http://camel.apache.org/schema/cxf/jaxrs/camel-cxf.xsd The camel-cxf SOAP component is moved to a new jaxws sub-package, that is, org.apache.camel.component.cxf is now org.apache.camel.component.cxf.jaxws . For example, the CxfComponent class is now located in org.apache.camel.component.cxf.jaxws . 3.6.5. FHIR The camel-fhir component has upgraded its hapi-fhir dependency to 4.1.0. The default FHIR version has been changed to R4. Therefore, if DSTU3 is desired, it has to be set explicitly. 3.6.6. Kafka The camel-kafka component has removed the options bridgeEndpoint and circularTopicDetection because they are no longer needed; the component now always behaves as bridging did on Camel 2.x. In other words, camel-kafka sends messages to the topic from the endpoint URI. To override this, use the KafkaConstants.OVERRIDE_TOPIC header with the new topic. See more details in the camel-kafka component documentation. 3.6.7. Telegram The camel-telegram component has moved the authorization token from the URI path to a query parameter, for example, migrate from telegram:bots/myTokenHere to telegram:bots?authorizationToken=myTokenHere . 3.6.8. JMX If you run Camel standalone with just camel-core as a dependency, and you want JMX enabled out of the box, then you need to add camel-management as a dependency. For using ManagedCamelContext you now need to get this extension from CamelContext as follows: 3.6.9. XSLT The XSLT component has moved out of camel-core into camel-xslt and camel-xslt-saxon . The component is separated so that camel-xslt is for using the JDK XSLT engine (Xalan), and camel-xslt-saxon is for when you use Saxon. This means that you should use xslt and xslt-saxon as the component names in your Camel endpoint URIs.
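As a minimal, illustrative Java DSL sketch of the XSLT change described above (the class name and stylesheet path are placeholders, not taken from the migration guide):

import org.apache.camel.builder.RouteBuilder;

public class XsltMigrationRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Uses the JDK XSLT engine (Xalan); requires the camel-xslt JAR
        from("direct:transformJdk").to("xslt:org/acme/transform.xsl");
        // Uses the Saxon engine; requires the camel-xslt-saxon JAR
        from("direct:transformSaxon").to("xslt-saxon:org/acme/transform.xsl");
    }
}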
If you are using the XSLT aggregation strategy, then use org.apache.camel.component.xslt.saxon.XsltSaxonAggregationStrategy for Saxon support. Use org.apache.camel.component.xslt.saxon.XsltSaxonBuilder for Saxon support if you are using the xslt builder. Also note that allowStax is only supported in camel-xslt-saxon , as it is not supported by the JDK XSLT engine. 3.6.10. XML DSL Migration The XML DSL has been changed slightly. The custom load balancer EIP has changed from <custom> to <customLoadBalancer> . The XMLSecurity data format has renamed the attribute keyOrTrustStoreParametersId to keyOrTrustStoreParametersRef in the <secureXML> tag. The <zipFile> data format has been renamed to <zipfile> . 3.7. Migrating Camel Maven Plugins The camel-maven-plugin has been split up into two Maven plugins: camel-maven-plugin camel-maven-plugin has the run goal, which is intended for quickly running Camel applications standalone. See https://camel.apache.org/manual/camel-maven-plugin.html for more information. camel-report-maven-plugin The camel-report-maven-plugin has the validate and route-coverage goals, which are used for generating reports for your Camel projects, such as validating Camel endpoint URIs and generating route coverage reports. See https://camel.apache.org/manual/camel-report-maven-plugin.html for more information. | [
"<dependency> <groupId>javax.xml.bind</groupId> <artifactId>jaxb-api</artifactId> <version>2.3.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-core</artifactId> <version>2.3.0.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-impl</artifactId> <version>2.3.2</version> </dependency>",
"2015-01-12 13:23:23,656 [- ShutdownTask] INFO DefaultShutdownStrategy - There are 1 inflight exchanges: InflightExchange: [exchangeId=ID-test-air-62213-1421065401253-0-3, fromRouteId=route1, routeId=route1, nodeId=delay1, elapsed=2007, duration=2017]",
"context.getShutdownStrategegy().setLogInflightExchangesOnTimeout(false);",
"telegram:bots/myTokenHere",
"telegram:bots?authorizationToken=myTokenHere",
"ManagedCamelContext managed = camelContext.getExtension(ManagedCamelContext.class);"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/migrating_to_red_hat_build_of_apache_camel_for_spring_boot/migrating-to-camel-spring-boot3 |
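To make the renamed-component changes in the preceding entry concrete, here is a hedged Maven dependency sketch for the http4 to http rename; version elements are omitted on the assumption that a Camel BOM manages them, and the same pattern applies to the other renamed components.

<!-- Camel 2.x dependency -->
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-http4</artifactId>
</dependency>

<!-- Camel 3.x replacement: component, package, and scheme are now plain "http" -->
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-http</artifactId>
</dependency>

Endpoint URIs change accordingly, for example from http4://example.com/path to http://example.com/path.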
Chapter 350. Timer Component | Chapter 350. Timer Component Available as of Camel version 1.0 The timer: component is used to generate message exchanges when a timer fires You can only consume events from this endpoint. 350.1. URI format Where name is the name of the Timer object, which is created and shared across endpoints. So if you use the same name for all your timer endpoints, only one Timer object and thread will be used. You can append query options to the URI in the following format, ?option=value&option=value&... Note: The IN body of the generated exchange is null . So exchange.getIn().getBody() returns null . TIP:*Advanced Scheduler* See also the Quartz component that supports much more advanced scheduling. TIP:*Specify time in human friendly format* In Camel 2.3 onwards you can specify the time in human friendly syntax . 350.2. Options The Timer component has no options. The Timer endpoint is configured using URI syntax: with the following path and query parameters: 350.2.1. Path Parameters (1 parameters): Name Description Default Type timerName Required The name of the timer String 350.2.2. Query Parameters (12 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN/ERROR level and ignored. false boolean delay (consumer) The number of milliseconds to wait before the first event is generated. Should not be used in conjunction with the time option. The default value is 1000. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long fixedRate (consumer) Events take place at approximately regular intervals, separated by the specified period. false boolean period (consumer) If greater than 0, generate periodic events every period milliseconds. The default value is 1000. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long repeatCount (consumer) Specifies a maximum limit of number of fires. So if you set it to 1, the timer will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this options is not in use. By default the consumer will deal with exceptions, that will be logged at WARN/ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the default exchange pattern when creating an exchange. ExchangePattern daemon (advanced) Specifies whether or not the thread associated with the timer endpoint runs as a daemon. The default value is true. true boolean pattern (advanced) Allows you to specify a custom Date pattern to use for setting the time option using URI syntax. String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean time (advanced) A java.util.Date the first event should be generated. If using the URI, the pattern expected is: yyyy-MM-dd HH:mm:ss or yyyy-MM-dd'T'HH:mm:ss. 
Date timer (advanced) To use a custom Timer Timer 350.3. Exchange Properties When the timer is fired, it adds the following information as properties to the Exchange : Name Type Description Exchange.TIMER_NAME String The value of the name option. Exchange.TIMER_TIME Date The value of the time option. Exchange.TIMER_PERIOD long The value of the period option. Exchange.TIMER_FIRED_TIME Date The time when the consumer fired. Exchange.TIMER_COUNTER Long Camel 2.8: The current fire counter. Starts from 1. 350.4. Sample To set up a route that generates an event every 60 seconds: from("timer://foo?fixedRate=true&period=60000").to("bean:myBean?method=someMethodName"); Tip Instead of 60000, you can use period=60s , which is friendlier to read. The above route will generate an event and then invoke the someMethodName method on the bean called myBean in the Registry, such as JNDI or Spring. And the route in Spring DSL: <route> <from uri="timer://foo?fixedRate=true&period=60000"/> <to uri="bean:myBean?method=someMethodName"/> </route> 350.5. Firing as soon as possible Available as of Camel 2.17 If you want to fire messages in a Camel route as soon as possible, you can use a negative delay: <route> <from uri="timer://foo?delay=-1"/> <to uri="bean:myBean?method=someMethodName"/> </route> In this way the timer will fire messages immediately. You can also specify a repeatCount parameter in conjunction with a negative delay to stop firing messages after a fixed number has been reached. If you don't specify a repeatCount , the timer will continue firing messages until the route is stopped. 350.6. Firing only once Available as of Camel 2.8 You may want to fire a message in a Camel route only once, such as when starting the route. To do that, use the repeatCount option as shown: <route> <from uri="timer://foo?repeatCount=1"/> <to uri="bean:myBean?method=someMethodName"/> </route> 350.7. See Also Scheduler Quartz | [
"timer:name[?options]",
"timer:timerName",
"from(\"timer://foo?fixedRate=true&period=60000\").to(\"bean:myBean?method=someMethodName\");",
"<route> <from uri=\"timer://foo?fixedRate=true&period=60000\"/> <to uri=\"bean:myBean?method=someMethodName\"/> </route>",
"<route> <from uri=\"timer://foo?delay=-1\"/> <to uri=\"bean:myBean?method=someMethodName\"/> </route>",
"<route> <from uri=\"timer://foo?repeatCount=1\"/> <to uri=\"bean:myBean?method=someMethodName\"/> </route>"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/timer-component |
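One addition to the preceding Timer entry: the text notes that repeatCount can be combined with a negative delay but does not show it, so here is a minimal Spring DSL sketch of that combination (note that inside an XML file the & separator between query options is typically escaped as &amp;):

<route>
  <from uri="timer://foo?delay=-1&amp;repeatCount=5"/>
  <to uri="bean:myBean?method=someMethodName"/>
</route>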
4.5. Configuring Console Options | 4.5. Configuring Console Options 4.5.1. Console Options Connection protocols are the underlying technology used to provide graphical consoles for virtual machines and allow users to work with virtual machines in a similar way as they would with physical machines. Red Hat Virtualization currently supports the following connection protocols: SPICE Simple Protocol for Independent Computing Environments (SPICE) is the recommended connection protocol for both Linux virtual machines and Windows virtual machines. To open a console to a virtual machine using SPICE, use Remote Viewer. VNC Virtual Network Computing (VNC) can be used to open consoles to both Linux virtual machines and Windows virtual machines. To open a console to a virtual machine using VNC, use Remote Viewer or a VNC client. RDP Remote Desktop Protocol (RDP) can only be used to open consoles to Windows virtual machines, and is only available when you access a virtual machines from a Windows machine on which Remote Desktop has been installed. Before you can connect to a Windows virtual machine using RDP, you must set up remote sharing on the virtual machine and configure the firewall to allow remote desktop connections. Note SPICE is not currently supported on virtual machines running Windows 8. If a Windows 8 virtual machine is configured to use the SPICE protocol, it will detect the absence of the required SPICE drivers and automatically fall back to using RDP. 4.5.1.1. Accessing Console Options You can configure several options for opening graphical consoles for virtual machines in the Administration Portal. Accessing Console Options Click Compute Virtual Machines and select a running virtual machine. Click Console Console Options . Note You can configure the connection protocols and video type in the Console tab of the Edit Virtual Machine window in the Administration Portal. Additional options specific to each of the connection protocols, such as the keyboard layout when using the VNC connection protocol, can be configured. See Section A.1.4, "Virtual Machine Console Settings Explained" for more information. 4.5.1.2. SPICE Console Options When the SPICE connection protocol is selected, the following options are available in the Console Options window. SPICE Options Map control-alt-del shortcut to ctrl+alt+end : Select this check box to map the Ctrl + Alt + Del key combination to Ctrl + Alt + End inside the virtual machine. Enable USB Auto-Share : Select this check box to automatically redirect USB devices to the virtual machine. If this option is not selected, USB devices will connect to the client machine instead of the guest virtual machine. To use the USB device on the guest machine, manually enable it in the SPICE client menu. Open in Full Screen : Select this check box for the virtual machine console to automatically open in full screen when you connect to the virtual machine. Press SHIFT + F11 to toggle full screen mode on or off. Enable SPICE Proxy : Select this check box to enable the SPICE proxy. 4.5.1.3. VNC Console Options When the VNC connection protocol is selected, the following options are available in the Console Options window. Console Invocation Native Client : When you connect to the console of the virtual machine, a file download dialog provides you with a file that opens a console to the virtual machine via Remote Viewer. noVNC : When you connect to the console of the virtual machine, a browser tab is opened that acts as the console. 
VNC Options Map control-alt-delete shortcut to ctrl+alt+end : Select this check box to map the Ctrl + Alt + Del key combination to Ctrl + Alt + End inside the virtual machine. 4.5.1.4. RDP Console Options When the RDP connection protocol is selected, the following options are available in the Console Options window. Console Invocation Auto : The Manager automatically selects the method for invoking the console. Native client : When you connect to the console of the virtual machine, a file download dialog provides you with a file that opens a console to the virtual machine via Remote Desktop. RDP Options Use Local Drives : Select this check box to make the drives on the client machine accessible on the guest virtual machine. 4.5.2. Remote Viewer Options 4.5.2.1. Remote Viewer Options When you specify the Native client console invocation option, you will connect to virtual machines using Remote Viewer. The Remote Viewer window provides a number of options for interacting with the virtual machine to which it is connected. Table 4.1. Remote Viewer Options Option Hotkey File Screenshot : Takes a screen capture of the active window and saves it in a location of your specification. USB device selection : If USB redirection has been enabled on your virtual machine, the USB device plugged into your client machine can be accessed from this menu. Quit : Closes the console. The hot key for this option is Shift + Ctrl + Q . View Full screen : Toggles full screen mode on or off. When enabled, full screen mode expands the virtual machine to fill the entire screen. When disabled, the virtual machine is displayed as a window. The hot key for enabling or disabling full screen is SHIFT + F11 . Zoom : Zooms in and out of the console window. Ctrl + + zooms in, Ctrl + - zooms out, and Ctrl + 0 returns the screen to its original size. Automatically resize : Tick to enable the guest resolution to automatically scale according to the size of the console window. Displays : Allows users to enable and disable displays for the guest virtual machine. Send key Ctrl + Alt + Del : On a Red Hat Enterprise Linux virtual machine, it displays a dialog with options to suspend, shut down or restart the virtual machine. On a Windows virtual machine, it displays the task manager or Windows Security dialog. Ctrl + Alt + Backspace : On a Red Hat Enterprise Linux virtual machine, it restarts the X sever. On a Windows virtual machine, it does nothing. Ctrl + Alt + F1 Ctrl + Alt + F2 Ctrl + Alt + F3 Ctrl + Alt + F4 Ctrl + Alt + F5 Ctrl + Alt + F6 Ctrl + Alt + F7 Ctrl + Alt + F8 Ctrl + Alt + F9 Ctrl + Alt + F10 Ctrl + Alt + F11 Ctrl + Alt + F12 Printscreen : Passes the Printscreen keyboard option to the virtual machine. Help The About entry displays the version details of Virtual Machine Viewer that you are using. Release Cursor from Virtual Machine SHIFT + F12 4.5.2.2. Remote Viewer Hotkeys You can access the hotkeys for a virtual machine in both full screen mode and windowed mode. If you are using full screen mode, you can display the menu containing the button for hotkeys by moving the mouse pointer to the middle of the top of the screen. If you are using windowed mode, you can access the hotkeys via the Send key menu on the virtual machine window title bar. Note If vdagent is not running on the client machine, the mouse can become captured in a virtual machine window if it is used inside a virtual machine and the virtual machine is not in full screen. To unlock the mouse, press Shift + F12 . 4.5.2.3. 
Manually Associating console.vv Files with Remote Viewer If you are prompted to download a console.vv file when attempting to open a console to a virtual machine using the native client console option, and Remote Viewer is already installed, then you can manually associate console.vv files with Remote Viewer so that Remote Viewer can automatically use those files to open consoles. Manually Associating console.vv Files with Remote Viewer Start the virtual machine. Open the Console Options window: In the Administration Portal, click Console Console Options . In the VM Portal, click the virtual machine name and click the pencil icon beside Console . Change the console invocation method to Native client and click OK . Attempt to open a console to the virtual machine, then click Save when prompted to open or save the console.vv file. Click the location on your local machine where you saved the file. Double-click the console.vv file and select Select a program from a list of installed programs when prompted. In the Open with window, select Always use the selected program to open this kind of file and click the Browse button. Click the C:\Users\[user name]\AppData\Local\virt-viewer\bin directory and select remote-viewer.exe . Click Open and then click OK . When you use the native client console invocation option to open a console to a virtual machine, Remote Viewer will automatically use the console.vv file that the Red Hat Virtualization Manager provides to open a console to that virtual machine without prompting you to select the application to use. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-configuring_console_options |
Support | Support OpenShift Container Platform 4.9 Getting support for OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc api-resources -o name | grep config.openshift.io",
"oc explain <resource_name>.config.openshift.io",
"oc get <resource_name>.config -o yaml",
"oc edit <resource_name>.config -o yaml",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"oc get route prometheus-k8s -n openshift-monitoring -o jsonpath=\"{.spec.host}\"",
"{__name__=~\"cluster:usage:.*|count:up0|count:up1|cluster_version|cluster_version_available_updates|cluster_operator_up|cluster_operator_conditions|cluster_version_payload|cluster_installer|cluster_infrastructure_provider|cluster_feature_set|instance:etcd_object_counts:sum|ALERTS|code:apiserver_request_total:rate:sum|cluster:capacity_cpu_cores:sum|cluster:capacity_memory_bytes:sum|cluster:cpu_usage_cores:sum|cluster:memory_usage_bytes:sum|openshift:cpu_usage_cores:sum|openshift:memory_usage_bytes:sum|workload:cpu_usage_cores:sum|workload:memory_usage_bytes:sum|cluster:virt_platform_nodes:sum|cluster:node_instance_type_count:sum|cnv:vmi_status_running:count|node_role_os_version_machine:cpu_capacity_cores:sum|node_role_os_version_machine:cpu_capacity_sockets:sum|subscription_sync_total|csv_succeeded|csv_abnormal|ceph_cluster_total_bytes|ceph_cluster_total_used_raw_bytes|ceph_health_status|job:ceph_osd_metadata:count|job:kube_pv:count|job:ceph_pools_iops:total|job:ceph_pools_iops_bytes:total|job:ceph_versions_running:count|job:noobaa_total_unhealthy_buckets:sum|job:noobaa_bucket_count:sum|job:noobaa_total_object_count:sum|noobaa_accounts_num|noobaa_total_usage|console_url|cluster:network_attachment_definition_instances:max|cluster:network_attachment_definition_enabled_instance_up:max|insightsclient_request_send_total|cam_app_workload_migrations|cluster:apiserver_current_inflight_requests:sum:max_over_time:2m|cluster:telemetry_selected_series:count\",alertstate=~\"firing|\"}",
"INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)",
"oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data",
"oc extract secret/pull-secret -n openshift-config --to=.",
"\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"<email_address>\"}",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running",
"oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1",
"{ \"name\": \"clusterconfig/authentication\", \"duration_in_ms\": 730, 1 \"records_count\": 1, \"errors\": null, \"panic\": null }",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}]",
"oc get -n openshift-insights deployment insights-operator -o yaml",
"initContainers: - name: insights-operator image: <your_insights_operator_image_version> terminationMessagePolicy: FallbackToLogsOnError volumeMounts:",
"oc apply -n openshift-insights -f gather-job.yaml",
"oc describe -n openshift-insights job/insights-operator-job",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job- <your_job>",
"oc logs -n openshift-insights insights-operator-job- <your_job> insights-operator",
"I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms",
"oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data",
"oc delete -n openshift-insights job insights-operator-job",
"oc extract secret/pull-secret -n openshift-config --to=.",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \"[email protected]\" } }",
"curl -v -H \"User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> \" -H \"Authorization: Bearer <your_token> \" -F \"upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar\" https://console.redhat.com/api/ingress/v1/upload",
"* Connection #0 to host console.redhat.com left intact {\"request_id\":\"393a7cf1093e434ea8dd4ab3eb28884c\",\"upload\":{\"account_number\":\"6274079\"}}%",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.7 2",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"├── cluster-logging │ ├── clo │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ ├── clusterlogforwarder_cr │ │ ├── cr │ │ ├── csv │ │ ├── deployment │ │ └── logforwarding_cr │ ├── collector │ │ ├── fluentd-2tr64 │ ├── eo │ │ ├── csv │ │ ├── deployment │ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4 │ ├── es │ │ ├── cluster-elasticsearch │ │ │ ├── aliases │ │ │ ├── health │ │ │ ├── indices │ │ │ ├── latest_documents.json │ │ │ ├── nodes │ │ │ ├── nodes_stats.json │ │ │ └── thread_pool │ │ ├── cr │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ └── logs │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ ├── install │ │ ├── co_logs │ │ ├── install_plan │ │ ├── olmo_logs │ │ └── subscription │ └── kibana │ ├── cr │ ├── kibana-9d69668d4-2rkvz ├── cluster-scoped-resources │ └── core │ ├── nodes │ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml │ └── persistentvolumes │ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml ├── event-filter.html ├── gather-debug.log └── namespaces ├── openshift-logging │ ├── apps │ │ ├── daemonsets.yaml │ │ ├── deployments.yaml │ │ ├── replicasets.yaml │ │ └── statefulsets.yaml │ ├── batch │ │ ├── cronjobs.yaml │ │ └── jobs.yaml │ ├── core │ │ ├── configmaps.yaml │ │ ├── endpoints.yaml │ │ ├── events │ │ │ ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml │ │ │ ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml │ │ ├── events.yaml │ │ ├── persistentvolumeclaims.yaml │ │ ├── pods.yaml │ │ ├── replicationcontrollers.yaml │ │ ├── secrets.yaml │ │ └── services.yaml │ ├── openshift-logging.yaml │ ├── pods │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ │ ├── cluster-logging-operator │ │ │ │ └── cluster-logging-operator │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff │ │ │ ├── cluster-logging-operator-registry │ │ │ │ └── cluster-logging-operator-registry │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ ├── elasticsearch-im-app-1596030300-bpgcx │ │ │ ├── elasticsearch-im-app-1596030300-bpgcx.yaml │ │ │ └── indexmanagement │ │ │ └── indexmanagement │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── fluentd-2tr64 │ │ │ ├── fluentd │ │ │ │ └── fluentd │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── fluentd-2tr64.yaml │ │ │ └── fluentd-init │ │ │ └── fluentd-init │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── kibana-9d69668d4-2rkvz │ │ │ ├── kibana │ │ │ │ └── kibana │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── kibana-9d69668d4-2rkvz.yaml │ │ │ └── kibana-proxy │ │ │ └── kibana-proxy │ │ │ └── logs │ │ │ 
├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ └── route.openshift.io │ └── routes.yaml └── openshift-operators-redhat ├──",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=quay.io/kubevirt/must-gather 2",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"oc get nodes",
"oc debug node/my-cluster-node",
"oc new-project dummy",
"oc patch namespace dummy --type=merge -p '{\"metadata\": {\"annotations\": { \"scheduler.alpha.kubernetes.io/defaultTolerations\": \"[{\\\"operator\\\": \\\"Exists\\\"}]\"}}}'",
"oc debug node/my-cluster-node",
"chroot /host",
"toolbox",
"sosreport -k crio.all=on -k crio.logs=on 1",
"Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 The checksum is: 382ffc167510fd71b4f12a4f40b97a4e",
"redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-sosreport.tar.xz 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'",
"oc adm node-logs --role=master -u kubelet 1",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"oc get nodes",
"oc debug node/my-cluster-node",
"chroot /host",
"ip ad",
"toolbox",
"tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"chroot /host crictl ps",
"chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}'",
"nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap.pcap 1",
"redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-tcpdump-capture-file.pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1",
"oc get nodes",
"oc debug node/my-cluster-node",
"chroot /host",
"toolbox",
"redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-diagnostic-data.tar.gz 1",
"chroot /host",
"toolbox",
"dnf install -y <package_name>",
"chroot /host",
"vi ~/.toolboxrc",
"REGISTRY=quay.io 1 IMAGE=fedora/fedora:33-x86_64 2 TOOLBOX_NAME=toolbox-fedora-33 3",
"toolbox",
"oc get clusterversion",
"oc describe clusterversion",
"ssh <user_name>@<load_balancer> systemctl status haproxy",
"ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623'",
"ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623'",
"dig <wildcard_fqdn> @<dns_server>",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug 1",
"./openshift-install create ignition-configs --dir=./install_dir",
"tail -f ~/<installation_directory>/.openshift_install.log",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u crio",
"ssh [email protected]_name.sub_domain.domain journalctl -b -f -u crio.service",
"curl -I http://<http_server_fqdn>:<port>/bootstrap.ign 1",
"grep -is 'bootstrap.ign' /var/log/httpd/access_log",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'",
"curl -I http://<http_server_fqdn>:<port>/master.ign 1",
"grep -is 'master.ign' /var/log/httpd/access_log",
"oc get nodes",
"oc describe node <master_node>",
"oc get daemonsets -n openshift-sdn",
"oc get pods -n openshift-sdn",
"oc logs <sdn_pod> -n openshift-sdn",
"oc get network.config.openshift.io cluster -o yaml",
"./openshift-install create manifests",
"oc get pods -n openshift-network-operator",
"oc logs pod/<network_operator_pod_name> -n openshift-network-operator",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u crio",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"curl https://api-int.<cluster_name>:22623/config/master",
"dig api-int.<cluster_name> @<dns_server>",
"dig -x <load_balancer_mco_ip_address> @<dns_server>",
"ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master",
"ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking",
"openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text",
"oc get pods -n openshift-etcd",
"oc get pods -n openshift-etcd-operator",
"oc describe pod/<pod_name> -n <namespace>",
"oc logs pod/<pod_name> -n <namespace>",
"oc logs pod/<pod_name> -c <container_name> -n <namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd-",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>'",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired'",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service | grep -is 'x509: certificate has expired'",
"curl -I http://<http_server_fqdn>:<port>/worker.ign 1",
"grep -is 'worker.ign' /var/log/httpd/access_log",
"oc get nodes",
"oc describe node <worker_node>",
"oc get pods -n openshift-machine-api",
"oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api",
"oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator",
"oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy",
"oc adm node-logs --role=worker -u kubelet",
"ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=worker -u crio",
"ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"oc adm node-logs --role=worker --path=sssd",
"oc adm node-logs --role=worker --path=sssd/sssd.log",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"curl https://api-int.<cluster_name>:22623/config/worker",
"dig api-int.<cluster_name> @<dns_server>",
"dig -x <load_balancer_mco_ip_address> @<dns_server>",
"ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker",
"ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking",
"openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text",
"oc get clusteroperators",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 1 csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending 2 csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc describe clusteroperator <operator_name>",
"oc get pods -n <operator_namespace>",
"oc describe pod/<operator_pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -n <operator_namespace>",
"oc get pod -o \"jsonpath={range .status.containerStatuses[*]}{.name}{'\\t'}{.state}{'\\t'}{.image}{'\\n'}{end}\" <operator_pod_name> -n <operator_namespace>",
"oc adm release info <image_path>:<tag> --commits",
"./openshift-install gather bootstrap --dir <installation_directory> 1",
"./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address>\" 5",
"INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"",
"oc get nodes",
"oc adm top nodes",
"oc adm top node my-node",
"oc debug node/my-node",
"chroot /host",
"systemctl is-active kubelet",
"systemctl status kubelet",
"oc adm node-logs --role=master -u kubelet 1",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"oc debug node/my-node",
"chroot /host",
"systemctl is-active crio",
"systemctl status crio.service",
"oc adm node-logs --role=master -u crio",
"oc adm node-logs <node_name> -u crio",
"ssh core@<node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory",
"can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks.",
"oc adm cordon <nodename>",
"oc adm drain <nodename> --ignore-daemonsets --delete-emptydir-data",
"ssh [email protected] sudo -i",
"systemctl stop kubelet",
".. for pod in USD(crictl pods -q); do if [[ \"USD(crictl inspectp USDpod | jq -r .status.linux.namespaces.options.network)\" != \"NODE\" ]]; then crictl rmp -f USDpod; fi; done",
"crictl rmp -fa",
"systemctl stop crio",
"crio wipe -f",
"systemctl start crio systemctl start kubelet",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.22.0-rc.0+75ee307",
"oc adm uncordon <nodename>",
"NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.22.0-rc.0+75ee307",
"rpm-ostree kargs --append='crashkernel=256M'",
"systemctl enable kdump.service",
"systemctl reboot",
"variant: openshift version: 4.9.0 metadata: name: 99-worker-kdump 1 labels: machineconfiguration.openshift.io/role: worker 2 openshift: kernel_arguments: 3 - crashkernel=256M storage: files: - path: /etc/kdump.conf 4 mode: 0644 overwrite: true contents: inline: | path /var/crash core_collector makedumpfile -l --message-level 7 -d 31 - path: /etc/sysconfig/kdump 5 mode: 0644 overwrite: true contents: inline: | KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\" KDUMP_COMMANDLINE_APPEND=\"irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable\" KEXEC_ARGS=\"-s\" KDUMP_IMG=\"vmlinuz\" systemd: units: - name: kdump.service enabled: true",
"butane 99-worker-kdump.bu -o 99-worker-kdump.yaml",
"oc create -f ./99-worker-kdump.yaml",
"systemctl --failed",
"journalctl -u <unit>.service",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-nodenet-override spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_script> mode: 0755 overwrite: true path: /usr/local/bin/override-node-ip.sh systemd: units: - contents: | [Unit] Description=Override node IP detection Wants=network-online.target Before=kubelet.service After=network-online.target [Service] Type=oneshot ExecStart=/usr/local/bin/override-node-ip.sh ExecStart=systemctl daemon-reload [Install] WantedBy=multi-user.target enabled: true name: nodenet-override.service",
"E0514 12:47:17.998892 2281 daemon.go:1350] content mismatch for file /etc/systemd/system/ovs-vswitchd.service: [Unit]",
"oc debug node/<node_name>",
"chroot /host",
"ovs-appctl vlog/list",
"console syslog file ------- ------ ------ backtrace OFF INFO INFO bfd OFF INFO INFO bond OFF INFO INFO bridge OFF INFO INFO bundle OFF INFO INFO bundles OFF INFO INFO cfm OFF INFO INFO collectors OFF INFO INFO command_line OFF INFO INFO connmgr OFF INFO INFO conntrack OFF INFO INFO conntrack_tp OFF INFO INFO coverage OFF INFO INFO ct_dpif OFF INFO INFO daemon OFF INFO INFO daemon_unix OFF INFO INFO dns_resolve OFF INFO INFO dpdk OFF INFO INFO",
"Restart=always ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /var/lib/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /etc/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /run/openvswitch' ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg",
"systemctl daemon-reload",
"systemctl restart ovs-vswitchd",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master 1 name: 99-change-ovs-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - dropins: - contents: | [Service] ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg 2 ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg name: 20-ovs-vswitchd-restart.conf name: ovs-vswitchd.service",
"oc apply -f 99-change-ovs-loglevel.yaml",
"oc adm node-logs <node_name> -u ovs-vswitchd",
"journalctl -b -f -u ovs-vswitchd.service",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc get clusteroperators",
"oc get pod -n <operator_namespace>",
"oc describe pod <operator_pod_name> -n <operator_namespace>",
"oc debug node/my-node",
"chroot /host",
"crictl ps",
"crictl ps --name network-operator",
"oc get pods -n <operator_namespace>",
"oc logs pod/<pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"true",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"false",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"oc project <project_name>",
"oc get pods",
"oc status",
"skopeo inspect docker://<image_reference>",
"oc edit deployment/my-deployment",
"oc get pods -w",
"oc get events",
"oc logs <pod_name>",
"oc logs <pod_name> -c <container_name>",
"oc exec <pod_name> ls -alh /var/log",
"oc exec <pod_name> cat /var/log/<path_to_log>",
"oc exec <pod_name> -c <container_name> ls /var/log",
"oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log>",
"oc project <namespace>",
"oc rsh <pod_name> 1",
"oc rsh -c <container_name> pod/<pod_name>",
"oc port-forward <pod_name> <host_port>:<pod_port> 1",
"oc get deployment -n <project_name>",
"oc debug deployment/my-deployment --as-root -n <project_name>",
"oc get deploymentconfigs -n <project_name>",
"oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>",
"oc cp <local_path> <pod_name>:/<path> -c <container_name> 1",
"oc cp <pod_name>:/<path> -c <container_name><local_path> 1",
"oc get pods -w 1",
"oc logs -f pod/<application_name>-<build_number>-build",
"oc logs -f pod/<application_name>-<build_number>-deploy",
"oc logs -f pod/<application_name>-<build_number>-<random_string>",
"oc describe pod/my-app-1-akdlg",
"oc logs -f pod/my-app-1-akdlg",
"oc exec my-app-1-akdlg -- cat /var/log/my-application.log",
"oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log",
"oc exec -it my-app-1-akdlg /bin/bash",
"oc debug node/my-cluster-node",
"chroot /host",
"crictl ps",
"crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print USD2}'",
"nsenter -n -t 31150 -- ip ad",
"Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume \"pvc-8837384d-69d7-40b2-b2e6-5df86943eef9\" Volume is already used by pod(s) sso-mysql-1-ns6b4",
"oc delete pod <old_pod> --force=true --grace-period=0",
"oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator",
"ssh -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -W %h:%p core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")' <username>@<windows_node_internal_ip> 1 2",
"oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}",
"ssh -L 2020:<windows_node_internal_ip>:3389 \\ 1 core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")",
"oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}",
"C:\\> net user <username> * 1",
"oc adm node-logs -l kubernetes.io/os=windows --path= /ip-10-0-138-252.us-east-2.compute.internal containers /ip-10-0-138-252.us-east-2.compute.internal hybrid-overlay /ip-10-0-138-252.us-east-2.compute.internal kube-proxy /ip-10-0-138-252.us-east-2.compute.internal kubelet /ip-10-0-138-252.us-east-2.compute.internal pods",
"oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log",
"oc adm node-logs -l kubernetes.io/os=windows --path=journal",
"oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker",
"C:\\> powershell",
"C:\\> Get-EventLog -LogName Application -Source Docker",
"oc -n ns1 get service prometheus-example-app -o yaml",
"labels: app: prometheus-example-app",
"oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml",
"spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring get pods",
"NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator",
"level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload",
"oc port-forward -n openshift-user-workload-monitoring pod/prometheus-user-workload-0 9090",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"topk(10,count by (job)({__name__=~\".+\"}))",
"oc <options> --loglevel <log_level>",
"oc whoami -t"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/support/index |
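The last two commands in the list above ( oc <options> --loglevel <log_level> and oc whoami -t ) are generic troubleshooting helpers. As a minimal, hedged sketch of how they combine, the following assumes a logged-in user and takes the API server address from oc whoami --show-server; none of these exact invocations appear in the document itself:

# capture the current session token
TOKEN=$(oc whoami -t)
# query the cluster API directly with that token, for example to list cluster Operators
curl -k -H "Authorization: Bearer ${TOKEN}" "$(oc whoami --show-server)/apis/config.openshift.io/v1/clusteroperators"
# re-run a misbehaving oc call with higher client-side verbosity
oc get clusteroperators --loglevel 6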
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/deploying_jboss_eap_on_amazon_web_services/con_making-open-source-more-inclusive |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/red_hat_jboss_enterprise_application_platform_installation_methods/con_making-open-source-more-inclusive |
8.133. man-pages-fr | 8.133. man-pages-fr 8.133.1. RHBA-2014:0637 - man-pages-fr bug fix update An updated man-pages-fr package that fixes one bug is now available for Red Hat Enterprise Linux 6. The man-pages-fr package contains a collection of manual pages translated into French. Bug Fix BZ# 891278 This update of the man-pages-fr package adds a warning that the French man page for the xinetd service includes options that are out of date. Users of man-pages-fr are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/man-pages-fr |
Observability | Observability Red Hat OpenShift Serverless 1.35 Observability features including administrator and developer metrics, cluster logging, and tracing Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/observability/index |
Chapter 3. API configuration examples | Chapter 3. API configuration examples 3.1. external_registry_config object reference { "is_enabled": True, "external_reference": "quay.io/redhat/quay", "sync_interval": 5000, "sync_start_date": datetime(2020, 0o1, 0o2, 6, 30, 0), "external_registry_username": "fakeUsername", "external_registry_password": "fakePassword", "external_registry_config": { "verify_tls": True, "unsigned_images": False, "proxy": { "http_proxy": "http://insecure.proxy.corp", "https_proxy": "https://secure.proxy.corp", "no_proxy": "mylocalhost", }, }, } 3.2. rule_rule object reference { "root_rule": {"rule_kind": "tag_glob_csv", "rule_value": ["latest", "foo", "bar"]}, } | [
"{ \"is_enabled\": True, \"external_reference\": \"quay.io/redhat/quay\", \"sync_interval\": 5000, \"sync_start_date\": datetime(2020, 0o1, 0o2, 6, 30, 0), \"external_registry_username\": \"fakeUsername\", \"external_registry_password\": \"fakePassword\", \"external_registry_config\": { \"verify_tls\": True, \"unsigned_images\": False, \"proxy\": { \"http_proxy\": \"http://insecure.proxy.corp\", \"https_proxy\": \"https://secure.proxy.corp\", \"no_proxy\": \"mylocalhost\", }, }, }",
"{ \"root_rule\": {\"rule_kind\": \"tag_glob_csv\", \"rule_value\": [\"latest\", \"foo\", \"bar\"]}, }"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_api_guide/api-config-examples |
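The object references above are shown as Python-style literals; when the same configuration is sent over HTTP, True becomes true and the datetime value becomes an ISO 8601 string. The following curl sketch illustrates that translation only; the mirror endpoint path, hostname, and token are assumptions for illustration, so check the Red Hat Quay API guide for the exact route:

# hypothetical request enabling repository mirroring with the fields shown above
curl -X POST "https://quay.example.com/api/v1/repository/<namespace>/<repository>/mirror" \
  -H "Authorization: Bearer <oauth_token>" \
  -H "Content-Type: application/json" \
  -d '{"is_enabled": true, "external_reference": "quay.io/redhat/quay", "sync_interval": 5000, "sync_start_date": "2020-01-02T06:30:00Z", "external_registry_username": "fakeUsername", "external_registry_password": "fakePassword", "external_registry_config": {"verify_tls": true, "unsigned_images": false}, "root_rule": {"rule_kind": "tag_glob_csv", "rule_value": ["latest", "foo", "bar"]}}'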
Chapter 2. System Requirements | Chapter 2. System Requirements This chapter outlines the minimum hardware and software requirements to install Red Hat Gluster Storage Web Administration. Important Ensure that all the requirements are met before the installation starts. Missing requirements can result in Red Hat Gluster Storage Web Administration environment not functioning as expected. The Red Hat Gluster Storage Web Administration environment requires: One machine to act as the management server One or more machines to act as storage servers. At least three machines are required to support replicated volumes One or more machines to be used as clients to access the Web Administration interface 2.1. Requirements for Web Administration Server System On the system to be designated as the Web Administration server, verify that these recommended hardware and software requirements are met. 2.1.1. Hardware Requirements The following are the different hardware requirements based on different cluster configurations: 2.1.1.1. Small Cluster Configuration Number of nodes: upto 8 nodes Number of volumes: upto 6-8 volumes per cluster Number of bricks per node for replicated volumes: upto 2-3 bricks Number of bricks per node for Erasure Coded volumes: upto 12-36 bricks Recommended Requirements 4 vCPUs 4 GB of available system RAM One Network Interface Card (NIC) with bandwidth of at least 1 Gbps Additional Storage Devices For hosting etcd data directory: Storage disk size: 20 GB per cluster Filesystem format: XFS Mounting directory: /var/lib/etcd For hosting time-series data from Graphite, Carbon, and Whisper applications: Storage disk size: 200 GB per cluster Filesystem format: XFS Mounting directory: /var/lib/carbon Note For more information on how to prepare and mount the additional disks, see the Creating a Partition and Mounting a File System sections in the Red Hat Enterprise Linux Storage Administration Guide . 2.1.1.2. Medium Cluster Configuration Number of nodes: 9-16 nodes Number of volumes: upto 6-8 volumes per cluster Number of bricks per node for replicated volumes: upto 2-3 bricks Number of bricks per node for Erasure Coded volumes: upto 12-36 bricks Recommended Requirements 4 vCPUs 6 GB of available system RAM One Network Interface Card (NIC) with bandwidth of at least 1 Gbps Additional Storage Devices For hosting etcd data directory: Storage disk size: 20 GB per cluster Filesystem format: XFS Mounting directory: /var/lib/etcd For hosting time-series data from Graphite, Carbon, and Whisper applications: Storage disk size: 350 GB per cluster Filesystem format: XFS Mounting directory: /var/lib/carbon 2.1.1.3. Large Cluster Configuration Number of nodes: 17-24 nodes Number of volumes: upto 6-8 volumes per cluster Number of bricks per node for replicated volumes: upto 2-3 bricks Number of bricks per node for Erasure Coded volumes: upto 12-36 bricks Recommended Requirements 6 vCPUs 6 GB of available system RAM One Network Interface Card (NIC) with bandwidth of at least 1 Gbps Additional Storage Devices For hosting etcd data directory: Storage disk size: 20 GB per cluster Filesystem format: XFS Mounting directory: /var/lib/etcd For hosting time-series data from Graphite, Carbon, and Whisper applications: Storage disk size: 500 GB per cluster Filesystem format: XFS Mounting directory: /var/lib/carbon 2.1.2. Software Requirements Red Hat Gluster Storage Web Administration is supported on Red Hat Enterprise Linux 7.5 or later 64-bit version. Table 2.1. 
Software Requirements Software Name and Version Operating System Red Hat Enterprise Linux 7.5 or later 2.2. Requirements for Red Hat Gluster Storage Nodes Ensure the following requirements are met on the Red Hat Gluster Storage nodes: Note Red Hat Gluster Storage Web Administration is not supported on new installations of Red Hat Gluster Storage 3.5.2 on Red Hat Enterprise Linux 8. Red Hat Gluster Storage server on Red Hat Enterprise Linux 8 and Red Hat Gluster Storage Web Administration on Red Hat Enterprise Linux 7 is not supported. Red Hat Enterprise Linux 7.5 or later. Red Hat Gluster Storage servers updated to the latest Red Hat Gluster Storage version 3.5 or greater. For detailed instructions on the upgrade process, see the Upgrading Red Hat Storage section in the Red Hat Gluster Storage Installation Guide. Minimum hardware requirements Note For more information, see the knowledge base article on Red Hat Gluster Storage Hardware Compatibility . Network Time Protocol (NTP) setup Firewall access to ports For detailed information on prerequisites and setting up Red Hat Gluster Storage server, see the Red Hat Gluster Storage 3.5 Installation Guide . 2.3. Requirements for the Client System The Red Hat Gluster Storage Web Administration environment can be accessed by a client machine with the following web browser compatibility: Table 2.2. Web Browser Compatibility Software Name and Version Web Browser Mozilla Firefox 38.7.0 or later Web Browser Google Chrome 46 or later 2.4. Firewall Configuration Automated Firewall Setup In this version of Red Hat Gluster Web Administration, firewall configuration is automated by Ansible automation. The tendrl-ansible installer configures the firewall during Web Administration installation as the variable *configure_firewalld_for_tendrl* is set to True by default. This automation opens all the required ports for the Web Administration environment. To automatically configure the firewall, follow the Web Administration installation process. See the Web Administration installation section in the Quick Start Guide for details. Note For tendrl-ansible to automate firewall setup, ensure the firewalld service is configured and enabled. For instructions, see Using firewalls in the Red Hat Enterprise Linux 7 Security Guide . Manual Firewall Setup To manually configure firewall for Web Administration services: Open the required ports before continuing the installation process Set the variable configure_firewalld_for_tendrl to False in the [all:vars] section of the inventory file which will be applied to both the groups: tendrl_server and gluster_servers . See sample variables described in Sample Inventory Variables at the end of 3.5 Web Administration Installation procedure of this guide. Note The inventory file is created as part of the Web Administration Ansible installation process. Follow Web Administration Installation procedure of this guide. The list of the ports and the port numbers are given in the table below: Table 2.3. Web Administration Port Numbers TCP Port Numbers Usage 2379 For etcd 2003 For Graphite 80 or 443 For tendrl http or https 8789 For tendrl-monitoring-integration NOTE: If you are updating to Web Administration 3.5 Update 2 or higher from versions, you no longer need to open TCP port 3000 on the Web Administration server. If you are updating to Web Administration 3.5 Update 3 or higher from versions, you no longer need to open TCP port 10080 on the Web Administration server. 
Access to Graphite-web over TCP port 10080 is unencrypted; open it only if required. To use Firewalld to open a particular port, run: To use iptables to open a particular port, run: Note To be able to execute the iptables commands successfully, ensure the iptables-services package is installed. To install the iptables-services package, run yum install iptables-services . A minimal firewall-cmd sketch covering the Web Administration ports listed in Table 2.3 follows this section. | [
"firewall-cmd --zone=zone_name --add-port=5667/tcp firewall-cmd --zone=zone_name --add-port=5667/tcp --permanent",
"iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5667 -j ACCEPT service iptables save"
]
| https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/quick_start_guide/system_requirements |
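For the manual firewall setup described above, a minimal firewall-cmd sketch that opens the ports from Table 2.3 looks like the following; the public zone name is an assumption, so substitute the zone used in your environment, and add port 80 instead of 443 if tendrl runs over plain http:

# runtime rules for etcd, Graphite, tendrl https, and tendrl-monitoring-integration
firewall-cmd --zone=public --add-port=2379/tcp --add-port=2003/tcp --add-port=443/tcp --add-port=8789/tcp
# repeat with --permanent so the rules survive a reboot
firewall-cmd --zone=public --add-port=2379/tcp --add-port=2003/tcp --add-port=443/tcp --add-port=8789/tcp --permanent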
23.9. NUMA Node Tuning | 23.9. NUMA Node Tuning After NUMA node tuning is done using virsh edit , the following domain XML parameters are affected: <domain> ... <numatune> <memory mode="strict" nodeset="1-4,^3"/> </numatune> ... </domain> Figure 23.11. NUMA node tuning Although all are optional, the components of this section of the domain XML are as follows: Table 23.6. NUMA node tuning elements Element Description <numatune> Provides details of how to tune the performance of a NUMA host physical machine by controlling NUMA policy for domain processes. <memory> Specifies how to allocate memory for the domain processes on a NUMA host physical machine. It contains several optional attributes. The mode attribute can be set to interleave , strict , or preferred . If no value is given it defaults to strict . The nodeset attribute specifies the NUMA nodes, using the same syntax as the cpuset attribute of the <vcpu> element. Attribute placement can be used to indicate the memory placement mode for the domain process. Its value can be either static or auto . If the <nodeset> attribute is specified it defaults to the <placement> of <vcpu> , or static . auto indicates the domain process will only allocate memory from the advisory nodeset returned from querying numad and the value of the nodeset attribute will be ignored if it is specified. If the <placement> attribute in vcpu is set to auto , and the <numatune> attribute is not specified, a default <numatune> with <placement> auto and strict mode will be added implicitly. | [
"<domain> <numatune> <memory mode=\"strict\" nodeset=\"1-4,^3\"/> </numatune> </domain>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Manipulating_the_domain_xml-NUMA_node_tuning |
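The same NUMA memory policy can also be applied without editing the full domain XML by using the virsh numatune subcommand; the guest name rt-guest below is a placeholder, and the nodeset and mode simply mirror the XML example above:

# show the current NUMA tuning of the guest (domain name is a placeholder)
virsh numatune rt-guest
# allocate guest memory strictly from nodes 1-4 excluding node 3, persisting the change in the config
virsh numatune rt-guest --mode strict --nodeset 1-4,^3 --config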
Chapter 4. OAuthClientAuthorization [oauth.openshift.io/v1] | Chapter 4. OAuthClientAuthorization [oauth.openshift.io/v1] Description OAuthClientAuthorization describes an authorization created by an OAuth client Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources clientName string ClientName references the client that created this authorization kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata scopes array (string) Scopes is an array of the granted scopes. userName string UserName is the user name that authorized this client userUID string UserUID is the unique UID associated with this authorization. UserUID and UserName must both match for this authorization to be valid. 4.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/oauthclientauthorizations DELETE : delete collection of OAuthClientAuthorization GET : list or watch objects of kind OAuthClientAuthorization POST : create an OAuthClientAuthorization /apis/oauth.openshift.io/v1/watch/oauthclientauthorizations GET : watch individual changes to a list of OAuthClientAuthorization. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/oauthclientauthorizations/{name} DELETE : delete an OAuthClientAuthorization GET : read the specified OAuthClientAuthorization PATCH : partially update the specified OAuthClientAuthorization PUT : replace the specified OAuthClientAuthorization /apis/oauth.openshift.io/v1/watch/oauthclientauthorizations/{name} GET : watch changes to an object of kind OAuthClientAuthorization. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/oauth.openshift.io/v1/oauthclientauthorizations HTTP method DELETE Description delete collection of OAuthClientAuthorization Table 4.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthClientAuthorization Table 4.3. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorizationList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthClientAuthorization Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body OAuthClientAuthorization schema Table 4.6. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorization schema 201 - Created OAuthClientAuthorization schema 202 - Accepted OAuthClientAuthorization schema 401 - Unauthorized Empty 4.2.2. /apis/oauth.openshift.io/v1/watch/oauthclientauthorizations HTTP method GET Description watch individual changes to a list of OAuthClientAuthorization. deprecated: use the 'watch' parameter with a list operation instead. Table 4.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/oauth.openshift.io/v1/oauthclientauthorizations/{name} Table 4.8. Global path parameters Parameter Type Description name string name of the OAuthClientAuthorization HTTP method DELETE Description delete an OAuthClientAuthorization Table 4.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthClientAuthorization Table 4.11. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorization schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthClientAuthorization Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorization schema 201 - Created OAuthClientAuthorization schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthClientAuthorization Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. Body parameters Parameter Type Description body OAuthClientAuthorization schema Table 4.16. HTTP responses HTTP code Reponse body 200 - OK OAuthClientAuthorization schema 201 - Created OAuthClientAuthorization schema 401 - Unauthorized Empty 4.2.4. /apis/oauth.openshift.io/v1/watch/oauthclientauthorizations/{name} Table 4.17. Global path parameters Parameter Type Description name string name of the OAuthClientAuthorization HTTP method GET Description watch changes to an object of kind OAuthClientAuthorization. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/oauth_apis/oauthclientauthorization-oauth-openshift-io-v1 |
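As a brief, hedged illustration of the list endpoint documented above, assuming a session with permission to read these cluster-scoped objects:

# list OAuth client authorizations through the CLI
oc get oauthclientauthorizations
# equivalent raw call to GET /apis/oauth.openshift.io/v1/oauthclientauthorizations
curl -k -H "Authorization: Bearer $(oc whoami -t)" "$(oc whoami --show-server)/apis/oauth.openshift.io/v1/oauthclientauthorizations"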
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Provide as much detail as possible so that your request can be addressed. Prerequisites You have a Red Hat account. If you do not have a Red Hat account, you can create one by clicking Register on the Red Hat Customer Portal home page. You are logged in to your Red Hat account. Procedure To provide your feedback, click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide more details about the issue or enhancement in the Description text box. If your Red Hat user name does not automatically appear in the Reporter text box, enter it. Scroll to the bottom of the page and then click the Create button. A documentation issue is created and routed to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/user_access_configuration_guide_for_role-based_access_control_rbac_with_fedramp/proc-providing-feedback-on-redhat-documentation |
Chapter 13. Real-Time Kernel | Chapter 13. Real-Time Kernel About Red Hat Enterprise Linux for Real Time Kernel The Red Hat Enterprise Linux for Real Time Kernel is designed to enable fine-tuning for systems with extremely high determinism requirements. The major increase in the consistency of results can, and should, be achieved by tuning the standard kernel. The real-time kernel enables gaining a small increase on top of increase achieved by tuning the standard kernel. The real-time kernel is available in the rhel-7-server-rt-rpms repository. The Installation Guide contains the installation instructions and the rest of the documentation is available at Product Documentation for Red Hat Enterprise Linux for Real Time . The can-dev module has been enabled for the real-time kernel The can-dev module has been enabled for the real-time kernel, providing the device interface for Controller Area Network (CAN) device drivers. CAN is a vehicle bus specification originally intended to connect the various micro-controllers in automobiles and has since extended to other areas. CAN is also used in industrial and machine controls where a high performance interface is required and other interfaces such as RS-485 are not sufficient. The functions exported from the can-dev module are used by CAN device drivers to make the kernel aware of the devices and to allow applications to connect and transfer data. Enabling CAN in the real-time kernel allows the use of third party CAN drivers and applications to implement CAN-based systems. (BZ# 1328607 ) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/new_features_real-time_kernel |
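Since the can-dev module only provides the device interface for CAN drivers, an application still needs a configured CAN network interface. The following is a minimal, hedged sketch using iproute2, assuming a hardware driver has already registered an interface named can0 and that a 500 kbit/s bitrate suits the bus; both the interface name and the bitrate are assumptions, not values from the document:

# configure the bitrate and bring the interface up (interface name and bitrate are assumptions)
ip link set can0 type can bitrate 500000
ip link set can0 up
# verify the interface state and low-level CAN details
ip -details -statistics link show can0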
Chapter 1. Red Hat Software Collections 3.2 | Chapter 1. Red Hat Software Collections 3.2 This chapter serves as an overview of the Red Hat Software Collections 3.2 content set. It provides a list of components and their descriptions, sums up changes in this version, documents relevant compatibility information, and lists known issues. 1.1. About Red Hat Software Collections For certain applications, more recent versions of some software components are often needed in order to use their latest new features. Red Hat Software Collections is a Red Hat offering that provides a set of dynamic programming languages, database servers, and various related packages that are either more recent than their equivalent versions included in the base Red Hat Enterprise Linux system, or are available for this system for the first time. Red Hat Software Collections 3.2 is be available for Red Hat Enterprise Linux 7; selected new components and previously released components also for Red Hat Enterprise Linux 6. For a complete list of components that are distributed as part of Red Hat Software Collections and a brief summary of their features, see Section 1.2, "Main Features" . Red Hat Software Collections does not replace the default system tools provided with Red Hat Enterprise Linux 6 or Red Hat Enterprise Linux 7. Instead, a parallel set of tools is installed in the /opt/ directory and can be optionally enabled per application by the user using the supplied scl utility. The default versions of Perl or PostgreSQL, for example, remain those provided by the base Red Hat Enterprise Linux system. All Red Hat Software Collections components are fully supported under Red Hat Enterprise Linux Subscription Level Agreements, are functionally complete, and are intended for production use. Important bug fix and security errata are issued to Red Hat Software Collections subscribers in a similar manner to Red Hat Enterprise Linux for at least two years from the release of each major version. In each major release stream, each version of a selected component remains backward compatible. For detailed information about length of support for individual components, refer to the Red Hat Software Collections Product Life Cycle document. 1.1.1. Red Hat Developer Toolset Red Hat Developer Toolset is a part of Red Hat Software Collections, included as a separate Software Collection. For more information about Red Hat Developer Toolset, refer to the Red Hat Developer Toolset Release Notes and the Red Hat Developer Toolset User Guide . 1.2. Main Features Table 1.1, "Red Hat Software Collections 3.2 Components" lists all components that are supported at the time of the Red Hat Software Collections 3.2 release. Table 1.1. Red Hat Software Collections 3.2 Components Component Software Collection Description Red Hat Developer Toolset 8.0 devtoolset-8 Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. For a complete list of components, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . Perl 5.24.0 rh-perl524 A release of Perl, a high-level programming language that is commonly used for system administration utilities and web programming. The rh-perl524 Software Collection provides additional utilities, scripts, and database connectors for MySQL and PostgreSQL . 
It includes the DateTime Perl module and the mod_perl Apache httpd module, which is supported only with the httpd24 Software Collection. Additionally, it provides the cpanm utility for easy installation of CPAN modules. Perl 5.26.1 [a] rh-perl526 A release of Perl, a high-level programming language that is commonly used for system administration utilities and web programming. The rh-perl526 Software Collection provides additional utilities, scripts, and database connectors for MySQL and PostgreSQL . It includes the DateTime Perl module and the mod_perl Apache httpd module, which is supported only with the httpd24 Software Collection. Additionally, it provides the cpanm utility for easy installation of CPAN modules. The rh-perl526 packaging is aligned with upstream; the perl526-perl package installs also core modules, while the interpreter is provided by the perl-interpreter package. PHP 7.0.27 rh-php70 A release of PHP 7.0 with PEAR 1.10, enhanced language features and performance improvement . PHP 7.1.8 [a] rh-php71 A release of PHP 7.1 with PEAR 1.10, APCu 5.1.8, and enhanced language features. PHP 7.2.10 [a] rh-php72 A release of PHP 7.2 with PEAR 1.10.5, APCu 5.1.12, and enhanced language features. Python 2.7.13 python27 A release of Python 2.7 with a number of additional utilities. This Python version provides various features and enhancements, including an ordered dictionary type, faster I/O operations, and improved forward compatibility with Python 3. The python27 Software Collections contains the Python 2.7.13 interpreter , a set of extension libraries useful for programming web applications and mod_wsgi (only supported with the httpd24 Software Collection), MySQL and PostgreSQL database connectors, and numpy and scipy . Python 3.5.1 rh-python35 The rh-python35 Software Collection contains Python 3.5.1 interpreter , a set of extension libraries useful for programming web applications and mod_wsgi (only supported with the httpd24 Software Collection), PostgreSQL database connector, and numpy and scipy . Python 3.6.3 rh-python36 The rh-python36 Software Collection contains Python 3.6.3, which introduces a number of new features, such as f-strings, syntax for variable annotations, and asynchronous generators and comprehensions . In addition, a set of extension libraries useful for programming web applications is included, with mod_wsgi (supported only together with the httpd24 Software Collection), PostgreSQL database connector, and numpy and scipy . Ruby 2.3.6 rh-ruby23 A release of Ruby 2.3. This version introduces a command-line option to freeze all string literals in the source files, a safe navigation operator, and multiple performance enhancements , while maintaining source-level backward compatibility with Ruby 2.2, Ruby 2.0.0, and Ruby 1.9.3. Ruby 2.4.3 rh-ruby24 A release of Ruby 2.4. This version provides multiple performance improvements and enhancements, for example improved hash table, new debugging features, support for Unicode case mappings, and support for OpenSSL 1.1.0 . Ruby 2.4.0 maintains source-level backward compatibility with Ruby 2.3, Ruby 2.2, Ruby 2.0.0, and Ruby 1.9.3. Ruby 2.5.0 [a] rh-ruby25 A release of Ruby 2.5. This version provides multiple performance improvements and new features, for example, simplified usage of blocks with the rescue , else , and ensure keywords, a new yield_self method, support for branch coverage and method coverage measurement, new Hash#slice and Hash#transform_keys methods . 
Ruby 2.5.0 maintains source-level backward compatibility with Ruby 2.4. Ruby on Rails 4.2.6 rh-ror42 A release of Ruby on Rails 4.2, a web application framework written in the Ruby language. Highlights in this release include Active Job, asynchronous mails, Adequate Record, Web Console, and foreign key support . This Software Collection is supported together with the rh-ruby23 and rh-nodejs4 Collections. Ruby on Rails 5.0.1 rh-ror50 A release of Ruby on Rails 5.0, the latest version of the web application framework written in the Ruby language. Notable new features include Action Cable, API mode, exclusive use of rails CLI over Rake, and ActionRecord attributes. This Software Collection is supported together with the rh-ruby24 and rh-nodejs6 Collections. Scala 2.10.6 [a] rh-scala210 A release of Scala, a general purpose programming language for the Java platform, which integrates features of object-oriented and functional languages. MariaDB 10.1.29 rh-mariadb101 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version adds the Galera Cluster support . MariaDB 10.2.8 rh-mariadb102 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version adds MariaDB Backup, Flashback, support for Recursive Common Table Expressions, window functions, and JSON functions . MongoDB 3.2.10 rh-mongodb32 A release of MongoDB, a cross-platform document-oriented database system classified as a NoSQL database . This Software Collection includes the mongo-java-driver package version 3.2.1. MongoDB 3.4.9 rh-mongodb34 A release of MongoDB, a cross-platform document-oriented database system classified as a NoSQL database. This release introduces support for new architectures, adds message compression and support for the decimal128 type, enhances collation features and more. MongoDB 3.6.3 [a] rh-mongodb36 A release of MongoDB, a cross-platform document-oriented database system classified as a NoSQL database. This release introduces change streams, retryable writes, and JSON Schema , as well as other features. MySQL 5.7.24 rh-mysql57 A release of MySQL, which provides a number of new features and enhancements, including improved performance. MySQL 8.0.13 [a] rh-mysql80 A release of the MySQL server, which introduces a number of new security and account management features and enhancements. PostgreSQL 9.5.14 rh-postgresql95 A release of PostgreSQL, which provides a number of enhancements, including row-level security control, introduces replication progress tracking, improves handling of large tables with high number of columns, and improves performance for sorting and multi-CPU machines. PostgreSQL 9.6.10 rh-postgresql96 A release of PostgreSQL, which introduces parallel execution of sequential scans, joins, and aggregates, and provides enhancements to synchronous replication, full-text search, deration driver, postgres_fdw, as well as performance improvements. PostgreSQL 10.5 [a] rh-postgresql10 A release of PostgreSQL, which includes a significant performance improvement and a number of new features, such as logical replication using the publish and subscribe keywords, or stronger password authentication based on the SCRAM-SHA-256 mechanism . 
Node.js 6.11.3 rh-nodejs6 A release of Node.js, which provides multiple API enhancements, performance and security improvements, ECMAScript 2015 support , and npm 3.10.9 . Node.js 8.11.4 [a] rh-nodejs8 A release of Node.js, which provides multiple API enhancements and new features, including V8 engine version 6.0, npm 5.6.0 and npx, enhanced security, experimental N-API support, and performance improvements. Node.js 10.10.0 [a] rh-nodejs10 A release of Node.js, which provides multiple API enhancements and new features, including V8 engine version 6.6, full N-API support , and stability improvements. nginx 1.10.2 rh-nginx110 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version introduces a number of new features, including dynamic module support, HTTP/2 support, Perl integration, and numerous performance improvements . nginx 1.12.1 [a] rh-nginx112 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version introduces a number of new features, including IP Transparency, improved TCP/UDP load balancing, enhanced caching performance, and numerous performance improvements . nginx 1.14.0 [a] rh-nginx114 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version provides a number of features, such as mirror module, HTTP/2 server push, gRPC proxy module, and numerous performance improvements . Apache httpd 2.4.34 httpd24 A release of the Apache HTTP Server (httpd), including a high performance event-based processing model, enhanced SSL module and FastCGI support . The mod_auth_kerb and mod_auth_mellon modules are also included. Varnish Cache 5.2.1 [a] rh-varnish5 A release of Varnish Cache, a high-performance HTTP reverse proxy. This version includes the shard director, experimental HTTP/2 support, and improvements to Varnish configuration through separate VCL files and VCL labels. Varnish Cache 6.0.0 [a] rh-varnish6 A release of Varnish Cache, a high-performance HTTP reverse proxy. This version includes support for Unix Domain Sockets (both for clients and for back-end servers), new level of the VCL language ( vcl 4.1 ), and improved HTTP/2 support . Maven 3.3.9 rh-maven33 A release of Maven, a software project management and comprehension tool used primarily for Java projects. This version provides various enhancements, for example, improved core extension mechanism . Maven 3.5.0 [a] rh-maven35 A release of Maven, a software project management and comprehension tool. This release introduces support for new architectures and a number of new features, including colorized logging . Git 2.18.1 [a] rh-git218 A release of Git, a distributed revision control system with a decentralized architecture. As opposed to centralized version control systems with a client-server model, Git ensures that each working copy of a Git repository is its exact copy with complete revision history. This version includes the Large File Storage (LFS) extension . Redis 3.2.4 rh-redis32 A release of Redis 3.2, a persistent key-value database . HAProxy 1.8.4 [a] rh-haproxy18 A release of HAProxy 1.8, a reliable, high-performance network load balancer for TCP and HTTP-based applications. Common Java Packages rh-java-common This Software Collection provides common Java libraries and tools used by other collections. 
The rh-java-common Software Collection is required by the devtoolset-4 , devtoolset-3 , rh-maven33 , maven30 , rh-mongodb32 , rh-mongodb26 , thermostat1 , rh-thermostat16 , and rh-eclipse46 components and it is not supposed to be installed directly by users. JDK Mission Control [a] rh-jmc This Software Collection includes JDK Mission Control (JMC) , a powerful profiler for HotSpot JVMs. JMC provides an advanced set of tools for efficient and detailed analysis of extensive data collected by the JDK Flight Recorder. JMC requires JDK version 8 or later to run. Target Java applications must run with at least OpenJDK version 11 so that JMC can access JDK Flight Recorder features. The rh-jmc Software Collection requires the rh-maven35 Software Collection. [a] This Software Collection is available only for Red Hat Enterprise Linux 7 Previously released Software Collections remain available in the same distribution channels. All Software Collections, including retired components, are listed in the Table 1.2, "All Available Software Collections" . Software Collections that are no longer supported are marked with an asterisk ( * ). See the Red Hat Software Collections Product Life Cycle document for information on the length of support for individual components. For detailed information regarding previously released components, refer to the Release Notes for earlier versions of Red Hat Software Collections. Table 1.2. All Available Software Collections Component Software Collection Availability Architectures supported on RHEL7 Components New in Red Hat Software Collections 3.2 Red Hat Developer Toolset 8.0 devtoolset-8 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le PHP 7.2.10 rh-php72 RHEL7 x86_64, s390x, aarch64, ppc64le MySQL 8.0.13 rh-mysql80 RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 10.10.0 rh-nodejs10 RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.14.0 rh-nginx114 RHEL7 x86_64, s390x, aarch64, ppc64le Varnish Cache 6.0.0 rh-varnish6 RHEL7 x86_64, s390x, aarch64, ppc64le Git 2.18.1 rh-git218 RHEL7 x86_64, s390x, aarch64, ppc64le JDK Mission Control rh-jmc RHEL7 x86_64 Table 1.2. All Available Software Collections Components Updated in Red Hat Software Collections 3.2 Apache httpd 2.4.34 httpd24 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.1 Red Hat Developer Toolset 7.1 devtoolset-7 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Perl 5.26.1 rh-perl526 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.5.0 rh-ruby25 RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.6.3 rh-mongodb36 RHEL7 x86_64, s390x, aarch64, ppc64le Varnish Cache 5.2.1 rh-varnish5 RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 10.5 rh-postgresql10 RHEL7 x86_64, s390x, aarch64, ppc64le HAProxy 1.8.4 rh-haproxy18 RHEL7 x86_64 PHP 7.0.27 rh-php70 RHEL6, RHEL7 x86_64 MySQL 5.7.24 rh-mysql57 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. 
All Available Software Collections Components Last Updated in Red Hat Software Collections 3.0 PHP 7.1.8 rh-php71 RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.12.1 rh-nginx112 RHEL7 x86_64, s390x, aarch64, ppc64le Python 3.6.3 rh-python36 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.5.0 rh-maven35 RHEL7 x86_64, s390x, aarch64, ppc64le MariaDB 10.2.8 rh-mariadb102 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 9.6.10 rh-postgresql96 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.4.9 rh-mongodb34 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 8.11.4 rh-nodejs8 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.4 Red Hat Developer Toolset 6.1 devtoolset-6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Scala 2.10.6 rh-scala210 RHEL7 x86_64 nginx 1.10.2 rh-nginx110 RHEL6, RHEL7 x86_64 Node.js 6.11.3 rh-nodejs6 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.4.3 rh-ruby24 RHEL6, RHEL7 x86_64 Ruby on Rails 5.0.1 rh-ror50 RHEL6, RHEL7 x86_64 Eclipse 4.6.3 rh-eclipse46 * RHEL7 x86_64 Python 2.7.13 python27 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Thermostat 1.6.6 rh-thermostat16 * RHEL6, RHEL7 x86_64 Maven 3.3.9 rh-maven33 RHEL6, RHEL7 x86_64 Common Java Packages rh-java-common RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.3 Git 2.9.3 rh-git29 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Redis 3.2.4 rh-redis32 RHEL6, RHEL7 x86_64 Perl 5.24.0 rh-perl524 RHEL6, RHEL7 x86_64 Python 3.5.1 rh-python35 RHEL6, RHEL7 x86_64 MongoDB 3.2.10 rh-mongodb32 RHEL6, RHEL7 x86_64 Ruby 2.3.6 rh-ruby23 RHEL6, RHEL7 x86_64 PHP 5.6.25 rh-php56 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.2 Red Hat Developer Toolset 4.1 devtoolset-4 * RHEL6, RHEL7 x86_64 MariaDB 10.1.29 rh-mariadb101 RHEL6, RHEL7 x86_64 MongoDB 3.0.11 upgrade collection rh-mongodb30upg * RHEL6, RHEL7 x86_64 Node.js 4.6.2 rh-nodejs4 * RHEL6, RHEL7 x86_64 PostgreSQL 9.5.14 rh-postgresql95 RHEL6, RHEL7 x86_64 Ruby on Rails 4.2.6 rh-ror42 RHEL6, RHEL7 x86_64 MongoDB 2.6.9 rh-mongodb26 * RHEL6, RHEL7 x86_64 Thermostat 1.4.4 thermostat1 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.1 Varnish Cache 4.0.3 rh-varnish4 * RHEL6, RHEL7 x86_64 nginx 1.8.1 rh-nginx18 * RHEL6, RHEL7 x86_64 Node.js 0.10 nodejs010 * RHEL6, RHEL7 x86_64 Maven 3.0.5 maven30 * RHEL6, RHEL7 x86_64 V8 3.14.5.10 v8314 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.0 Red Hat Developer Toolset 3.1 devtoolset-3 * RHEL6, RHEL7 x86_64 Perl 5.20.1 rh-perl520 * RHEL6, RHEL7 x86_64 Python 3.4.2 rh-python34 * RHEL6, RHEL7 x86_64 Ruby 2.2.9 rh-ruby22 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.1.5 rh-ror41 * RHEL6, RHEL7 x86_64 MariaDB 10.0.33 rh-mariadb100 * RHEL6, RHEL7 x86_64 MySQL 5.6.40 rh-mysql56 * RHEL6, RHEL7 x86_64 PostgreSQL 9.4.14 rh-postgresql94 * RHEL6, RHEL7 x86_64 Passenger 4.0.50 rh-passenger40 * RHEL6, RHEL7 x86_64 PHP 5.4.40 php54 * RHEL6, RHEL7 x86_64 PHP 5.5.21 php55 * RHEL6, RHEL7 x86_64 nginx 1.6.2 nginx16 * RHEL6, RHEL7 x86_64 DevAssistant 0.9.3 devassist09 * RHEL6, RHEL7 x86_64 Table 1.2. 
All Available Software Collections Components Last Updated in Red Hat Software Collections 1 Git 1.9.4 git19 * RHEL6, RHEL7 x86_64 Perl 5.16.3 perl516 * RHEL6, RHEL7 x86_64 Python 3.3.2 python33 * RHEL6, RHEL7 x86_64 Ruby 1.9.3 ruby193 * RHEL6, RHEL7 x86_64 Ruby 2.0.0 ruby200 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.0.2 ror40 * RHEL6, RHEL7 x86_64 MariaDB 5.5.53 mariadb55 * RHEL6, RHEL7 x86_64 MongoDB 2.4.9 mongodb24 * RHEL6, RHEL7 x86_64 MySQL 5.5.52 mysql55 * RHEL6, RHEL7 x86_64 PostgreSQL 9.2.18 postgresql92 * RHEL6, RHEL7 x86_64 Legend: RHEL6 - Red Hat Enterprise Linux 6 RHEL7 - Red Hat Enterprise Linux 7 x86_64 - AMD64 and Intel 64 architectures s390x - IBM Z aarch64 - The 64-bit ARM architecture ppc64 - IBM POWER, big endian ppc64le - IBM POWER, little endian * - Retired component; this Software Collection is no longer supported The tables above list the latest versions available through asynchronous updates. Note that Software Collections released in Red Hat Software Collections 2.0 and later include a rh- prefix in their names. Eclipse is available as a part of the Red Hat Developer Tools offering. 1.3. Changes in Red Hat Software Collections 3.2 1.3.1. Overview Architectures The Red Hat Software Collections offering contains packages for Red Hat Enterprise Linux 7 running on AMD64 and Intel 64 architectures; certain Software Collections are available also for Red Hat Enterprise Linux 6. In addition, Red Hat Software Collections 3.2 supports the following architectures on Red Hat Enterprise Linux 7: The 64-bit ARM architecture IBM Z IBM POWER, little endian For a full list of components and their availability, see Table 1.2, "All Available Software Collections" . New Software Collections Red Hat Software Collections 3.2 adds these new Software Collections: devtoolset-8 - see Section 1.3.2, "Changes in Red Hat Developer Toolset" rh-php72 - see Section 1.3.3, "Changes in PHP" rh-mysql80 - see Section 1.3.4, "Changes in MySQL" rh-nodejs10 - see Section 1.3.5, "Changes in Node.js" rh-nginx114 - see Section 1.3.6, "Changes in nginx" rh-varnish6 - see Section 1.3.7, "Changes in Varnish Cache" rh-git218 - see Section 1.3.8, "Changes in Git" rh-jmc - JDK Mission Control (JMC) is a powerful profiler for HotSpot JVMs. JMC provides an advanced set of tools for efficient and detailed analysis of extensive data collected by the JDK Flight Recorder. The tool chain enables developers and administrators to collect and analyze data from Java applications running locally or deployed in production environments. Note that JMC requires JDK version 8 or later to run. Target Java applications must run with at least OpenJDK version 11 so that JMC can access JDK Flight Recorder features. The rh-jmc Software Collection is available with the RHEA-2019:0543 advisory. All new Software Collections are available only for Red Hat Enterprise Linux 7. 
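As a quick, hedged illustration of trying out some of the new Collections listed above, assuming the corresponding packages have already been installed from the Red Hat Software Collections repositories, you can run a single command inside a Collection environment with the scl utility:
scl enable devtoolset-8 'gcc --version'
scl enable rh-git218 'git --version'
scl enable rh-nodejs10 'node --version'
The scl enable call only adjusts the environment (PATH, LD_LIBRARY_PATH, and so on) for the wrapped command; it does not change the base system toolchain.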
Updated Software Collections The following component has been updated in Red Hat Software Collections 3.2: httpd24 - see Section 1.3.9, "Changes in Apache httpd" Red Hat Software Collections Container Images The following container images are new in Red Hat Software Collections 3.2: rhscl/devtoolset-8-toolchain-rhel7 rhscl/devtoolset-8-perftools-rhel7 rhscl/mysql-80-rhel7 rhscl/nginx-114-rhel7 rhscl/php-72-rhel7 rhscl/varnish-6-rhel7 The following container images have been updated in Red Hat Software Collections 3.2: rhscl/httpd-24-rhel7 For detailed information regarding Red Hat Software Collections container images, see Section 3.4, "Red Hat Software Collections Container Images" . 1.3.2. Changes in Red Hat Developer Toolset The following components have been upgraded in Red Hat Developer Toolset 8.0 compared to the previous release of Red Hat Developer Toolset: GCC to version 8.2.1 GDB to version 8.2 Valgrind to version 3.14.0 elfutils to version 0.174 binutils to version 2.30 strace to version 4.24 OProfile to version 1.3.0 SystemTap to version 3.3 In addition, bug fix updates are available for the following components: dwz ltrace Dyninst For detailed information on changes in 8.0, see the Red Hat Developer Toolset User Guide . 1.3.3. Changes in PHP The new rh-php72 Software Collection includes PHP 7.2.10 with PEAR 1.10.5 , APCu 5.1.12 , and improved language features. This version introduces the following enhancements: Converting numeric keys in object-to-array and array-to-object casts Counting of non-countable objects A new object typehint HashContext changed from a resource to an object Improved TLS constants Performance improvements For detailed information on bug fixes and enhancements provided by rh-php72 , see the upstream change log . For information regarding migrating from PHP 7.1 to PHP 7.2, see the upstream migration guide . 1.3.4. Changes in MySQL The new rh-mysql80 Software Collection includes MySQL 8.0.13 , which introduces a number of new security and account management features and enhancements. Notable changes include: MySQL now incorporates a transactional data dictionary , which stores information about database objects. MySQL now supports roles , which are collections of privileges. The default character set has been changed from latin1 to utf8mb4 . Support for common table expressions , both nonrecursive and recursive, has been added. MySQL now supports window functions , which perform a calculation for each row from a query, using related rows. InnoDB now supports the NOWAIT and SKIP LOCKED options with locking read statements. GIS-related functions have been improved. JSON functionality has been enhanced. For detailed changes, see the upstream documentation: What Is New in MySQL 8.0 and Changes in MySQL 8.0 . For migration instructions, refer to Section 5.2, "Migrating to MySQL 8.0" . Notable differences between upstream MySQL 8.0 and rh-mysql80 The MySQL 8.0 server provided by the rh-mysql80 Software Collection is configured to use mysql_native_password as the default authentication plug-in because client tools and libraries in Red Hat Enterprise Linux 7 are incompatible with the caching_sha2_password method, which is used by default in the upstream MySQL 8.0 version. To change the default authentication plug-in to caching_sha2_password , edit the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-default-authentication-plugin.cnf file as follows: For more information about the caching_sha2_password authentication plug-in, see the upstream documentation .
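A minimal sketch of the edit described above: the file needs only a [mysqld] section with the plug-in setting, and the MySQL server must be restarted afterwards (the rh-mysql80-mysqld service name is an assumption to verify on your system).
[mysqld]
default_authentication_plugin=caching_sha2_password
Existing accounts keep the authentication plug-in they were created with; only newly created accounts pick up the new default.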
The rh-mysql80 Software Collection includes the rh-mysql80-syspaths package, which installs the rh-mysql80-mysql-config-syspaths , rh-mysql80-mysql-server-syspaths , and rh-mysql80-mysql-syspaths packages. These subpackages provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mysql80*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mysql80* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 Software Collection. 1.3.5. Changes in Node.js The new rh-nodejs10 Software Collection provides Node.js 10.10.0 with npm 6.4.1 . Notable features in this release include: V8 engine version 6.6 Support for N-API is no longer experimental Stability improvements Enhanced security features For detailed changes in Node.js 10.10.0 , see the upstream release notes and upstream documentation . 1.3.6. Changes in nginx The new rh-nginx114 Software Collection includes nginx 1.14.0 , which provides a number of performance improvements, bug fixes, and new features, such as: The mirror module The gRPC proxy module HTTP/2 server push Improvements to Vim syntax highlighting scripts For more information regarding changes in nginx , refer to the upstream release notes . For migration instructions, see Section 5.8, "Migrating to nginx 1.14" 1.3.7. Changes in Varnish Cache Varnish Cache 6.0.0 , included in the new rh-varnish6 Software Collection, provides a number of bug fixes and enhancements over the previously released version. For example: Support for Unix Domain Sockets (UDS), both for clients and for back-end servers A new level of the Varnish Configuration Language (VCL), vcl 4.1 Improvements to HTTP/2 support New and improved Varnish Modules (VMODs): vmod_directors vmod_proxy vmod_unix vmod_vtc For detailed changes in Varnish Cache 6.0.0 , refer to the upstream change log . See also the upstream documentation and upgrading notes . 1.3.8. Changes in Git The new rh-git218 Software Collection includes Git 2.18.1 , which provides numerous bug fixes and new features compared to the rh-git29 Collection released with Red Hat Software Collections 2.3. Notable changes specific to the rh-git218 Software Collection include: The lfs extension has been added and it is installed by default with rh-git218 . Git Large File Storage (LFS) replaces large files with text pointers inside Git and stores the file contents on a remote server. A new rh-git218-git-instaweb subpackage is available, which depends on the base Red Hat Enterprise Linux version of Apache HTTP server. When the rh-git218-git-instaweb package is installed, the git instaweb command works with the web server with no further configuration. For detailed list of further enhancements, bug fixes, and backward compatibility notes related to Git 2.18.1 , see the upstream release notes . See also the Git manual page for version 2.18.1. 1.3.9. Changes in Apache httpd The Apache HTTP Server , provided by the httpd24 Software Collection, has been updated to upstream version 2.4.34. Notable changes include: HTTP/2 support has been improved. Additional features provided by OpenSSL 1.0.2 have been implemented. This update adds the mod_md module to the httpd24 Software Collection. The module enables managing domains across virtual hosts and certificate provisioning using the Automatic Certificate Management Environment (ACME) protocol. 
The mod_md module is available only for Red Hat Enterprise Linux 7. The handling of TLS Server Name Indication (SNI) hints in the Apache HTTP Server has changed. If the SNI hint given in the TLS handshake does not match the Host: header in the HTTP request, an HTTP 421 Misdirected Request error response is now sent by the server instead of the 400 Bad Request error response. If the SNI hint does not match the server name of a configured VirtualHost , the usual VirtualHost matching rules are now followed, that is, matching the first configured host. Previously, a 400 Bad Request error response was sent. For more information on changes in Apache httpd 2.4.34 , see the upstream release notes . 1.4. Compatibility Information Red Hat Software Collections 3.2 is available for all supported releases of Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures, the 64-bit ARM architecture, IBM Z, and IBM POWER, little endian. Certain components are available also for all supported releases of Red Hat Enterprise Linux 6 on AMD64 and Intel 64 architectures. For a full list of available components, see Table 1.2, "All Available Software Collections" . 1.5. Known Issues rh-mysql80 , BZ# 1646363 The mysql-connector-java database connector does not work with the MySQL 8.0 server. rh-mysql80 , BZ# 1646158 The default character set has been changed to utf8mb4 in MySQL 8.0 but this character set is unsupported by the php-mysqlnd database connector. Consequently, php-mysqlnd fails to connect in the default configuration. To work around this problem, specify a known character set as a parameter of the MySQL server configuration. For example, modify the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-server.cnf file to read: httpd24 component The updated version of the cURL tool included in the httpd24 Software Collection does not support HTTP/2. Consequently, scripts reliant on HTTP/2 support in this version of cURL fail, or fall back to HTTP/1.1. httpd24 component, BZ# 1429006 Since httpd 2.4.27 , the mod_http2 module is no longer supported with the default prefork Multi-Processing Module (MPM). To enable HTTP/2 support, edit the configuration file at /opt/rh/httpd24/root/etc/httpd/conf.modules.d/00-mpm.conf and switch to the event or worker MPM. Note that the HTTP/2 server-push feature does not work on the 64-bit ARM architecture, IBM Z, and IBM POWER, little endian. httpd24 component, BZ# 1327548 The mod_ssl module does not support the ALPN protocol on Red Hat Enterprise Linux 6, or on Red Hat Enterprise Linux 7.3 and earlier. Consequently, clients that support upgrading TLS connections to HTTP/2 only using ALPN are limited to HTTP/1.1 support. httpd24 component, BZ# 1224763 When using the mod_proxy_fcgi module with FastCGI Process Manager (PHP-FPM), httpd uses port 8000 for the FastCGI protocol by default instead of the correct port 9000 . To work around this problem, specify the correct port explicitly in configuration. httpd24 component, BZ# 1382706 When SELinux is enabled, the LD_LIBRARY_PATH environment variable is not passed through to CGI scripts invoked by httpd . As a consequence, in some cases it is impossible to invoke executables from Software Collections enabled in the /opt/rh/httpd24/service-environment file from CGI scripts run by httpd . To work around this problem, set LD_LIBRARY_PATH as desired from within the CGI script. httpd24 component Compiling external applications against the Apache Portable Runtime (APR) and APR-util libraries from the httpd24 Software Collection is not supported. 
The LD_LIBRARY_PATH environment variable is not set in httpd24 because it is not required by any application in this Software Collection. rh-python35 , rh-python36 components, BZ# 1499990 The pytz module, which is used by Babel for time zone support, is not included in the rh-python35 , and rh-python36 Software Collections. Consequently, when the user tries to import the dates module from Babel , a traceback is returned. To work around this problem, install pytz through the pip package manager from the pypi public repository by using the pip install pytz command. rh-python36 component Certain complex trigonometric functions provided by numpy might return incorrect values on the 64-bit ARM architecture, IBM Z, and IBM POWER, little endian. The AMD64 and Intel 64 architectures are not affected by this problem. python27 component, BZ# 1330489 The python27-python-pymongo package has been updated to version 3.2.1. Note that this version is not fully compatible with the previously shipped version 2.5.2. python27 component In Red Hat Enterprise Linux 7, when the user tries to install the python27-python-debuginfo package, the /usr/src/debug/Python-2.7.5/Modules/socketmodule.c file conflicts with the corresponding file from the python-debuginfo package installed on the core system. Consequently, installation of the python27-python-debuginfo fails. To work around this problem, uninstall the python-debuginfo package and then install the python27-python-debuginfo package. scl-utils component In Red Hat Enterprise Linux 7.5 and earlier, due to an architecture-specific macro bug in the scl-utils package, the <collection>/root/usr/lib64/ directory does not have the correct package ownership on the 64-bit ARM architecture and on IBM POWER, little endian. As a consequence, this directory is not removed when a Software Collection is uninstalled. To work around this problem, manually delete <collection>/root/usr/lib64/ when removing a Software Collection. rh-ruby24 , rh-ruby23 components Determination of RubyGem installation paths is dependent on the order in which multiple Software Collections are enabled. The required order has been changed since Ruby 2.3.1 shipped in Red Hat Software Collections 2.3 to support dependent Collections. As a consequence, RubyGem paths, which are used for gem installation during an RPM build, are invalid when the Software Collections are supplied in an incorrect order. For example, the build now fails if the RPM spec file contains scl enable rh-ror50 rh-nodejs6 . To work around this problem, enable the rh-ror50 Software Collection last, for example, scl enable rh-nodejs6 rh-ror50 . rh-maven35 , rh-maven33 components When the user has installed both the Red Hat Enterprise Linux system version of maven-local package and the rh-maven35-maven-local package or rh-maven33-maven-local package , XMvn , a tool used for building Java RPM packages, run from the rh-maven35 or rh-maven33 Software Collection tries to read the configuration file from the base system and fails. To work around this problem, uninstall the maven-local package from the base Red Hat Enterprise Linux system. perl component It is impossible to install more than one mod_perl.so library. As a consequence, it is not possible to use the mod_perl module from more than one Perl Software Collection. 
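For the pytz workaround noted above for the rh-python35 and rh-python36 Collections, a minimal sketch, assuming the Collection is installed and the host can reach the PyPI repository:
scl enable rh-python36 'pip install pytz'
Run the equivalent command with rh-python35 if you use that Collection; on systems where the Collection root is not writable, adding the --user option to pip is a common alternative.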
postgresql component The rh-postgresql9* packages for Red Hat Enterprise Linux 6 do not provide the sepgsql module as this feature requires installation of libselinux version 2.0.99, which is not available in Red Hat Enterprise Linux 6. httpd , mariadb , mongodb , mysql , nodejs , perl , php , python , ruby , and ror components, BZ# 1072319 When uninstalling the httpd24 , rh-mariadb* , rh-mongodb* , rh-mysql* , rh-nodejs* , rh-perl* , rh-php* , python27 , rh-python* , rh-ruby* , or rh-ror* packages, the order of uninstalling can be relevant due to ownership of dependent packages. As a consequence, some directories and files might not be removed properly and might remain on the system. mariadb , mysql components, BZ# 1194611 Since MariaDB 10 and MySQL 5.6 , the rh-mariadb*-mariadb-server and rh-mysql*-mysql-server packages no longer provide the test database by default. Although this database is not created during initialization, the grant tables are prefilled with the same values as when test was created by default. As a consequence, upon a later creation of the test or test_* databases, these databases have less restricted access rights than is default for new databases. Additionally, when running benchmarks, the run-all-tests script no longer works out of the box with example parameters. You need to create a test database before running the tests and specify the database name in the --database parameter. If the parameter is not specified, test is taken by default but you need to make sure the test database exist. mariadb , mysql , postgresql , mongodb components Red Hat Software Collections 3.2 contains the MySQL 5.7 , MySQL 8.0 , MariaDB 10.0 , MariaDB 10.1 , MariaDB 10.2 , PostgreSQL 9.5 , PostgreSQL 9.6 , PostgreSQL 10 , MongoDB 3.2 , MongoDB 3.4 , and MongoDB 3.6 databases. The core Red Hat Enterprise Linux 6 provides earlier versions of the MySQL and PostgreSQL databases (client library and daemon). The core Red Hat Enterprise Linux 7 provides earlier versions of the MariaDB and PostgreSQL databases (client library and daemon). Client libraries are also used in database connectors for dynamic languages, libraries, and so on. The client library packaged in the Red Hat Software Collections database packages in the PostgreSQL component is not supposed to be used, as it is included only for purposes of server utilities and the daemon. Users are instead expected to use the system library and the database connectors provided with the core system. A protocol, which is used between the client library and the daemon, is stable across database versions, so, for example, using the PostgreSQL 9.2 client library with the PostgreSQL 9.4 or 9.5 daemon works as expected. The core Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 do not include the client library for MongoDB . In order to use this client library for your application, you should use the client library from Red Hat Software Collections and always use the scl enable ... call every time you run an application linked against this MongoDB client library. mariadb , mysql , mongodb components MariaDB, MySQL, and MongoDB do not make use of the /opt/ provider / collection /root prefix when creating log files. Note that log files are saved in the /var/opt/ provider / collection /log/ directory, not in /opt/ provider / collection /root/var/log/ . rh-eclipse46 component When a plug-in from a third-party update site is installed, Eclipse sometimes fails to start with a NullPointerException in the workspace log file. 
To work around this problem, restart Eclipse with the -clean option. For example: rh-eclipse46 component The Eclipse Docker Tooling introduces a Dockerfile editor with syntax highlighting and a basic command auto-completion. When the Build Image Wizard is open and the Edit Dockerfile button is pressed, the Dockerfile editor opens the file in a detached editor window. However, this window does not contain the Cancel and Save buttons. To work around this problem, press Ctrl + S to save your changes or right-click in the editor to launch a context menu, which offers the Save option. To cancel your changes, close the window. rh-eclipse46 component On Red Hat Enterprise Linux 7.2, a bug in the perf tool, which is used to populate the Perf Profile View in Eclipse , causes some of the items in the view not to be properly linked to their respective positions in the Eclipse Editor. While the profiling works as expected, it is not possible to navigate to related positions in the Editor by clicking on parts of the Perl Profile View . Other Notes rh-ruby* , rh-python* , rh-php* components Using Software Collections on a read-only NFS has several limitations. Ruby gems cannot be installed while the rh-ruby* Software Collection is on a read-only NFS. Consequently, for example, when the user tries to install the ab gem using the gem install ab command, an error message is displayed, for example: The same problem occurs when the user tries to update or install gems from an external source by running the bundle update or bundle install commands. When installing Python packages on a read-only NFS using the Python Package Index (PyPI), running the pip command fails with an error message similar to this: Installing packages from PHP Extension and Application Repository (PEAR) on a read-only NFS using the pear command fails with the error message: This is an expected behavior. httpd component Language modules for Apache are supported only with the Red Hat Software Collections version of Apache httpd and not with the Red Hat Enterprise Linux system versions of httpd . For example, the mod_wsgi module from the rh-python35 Collection can be used only with the httpd24 Collection. all components Since Red Hat Software Collections 2.0, configuration files, variable data, and runtime data of individual Collections are stored in different directories than in versions of Red Hat Software Collections. coreutils , util-linux , screen components Some utilities, for example, su , login , or screen , do not export environment settings in all cases, which can lead to unexpected results. It is therefore recommended to use sudo instead of su and set the env_keep environment variable in the /etc/sudoers file. Alternatively, you can run commands in a reverse order; for example: instead of When using tools like screen or login , you can use the following command to preserve the environment settings: source /opt/rh/<collection_name>/enable python component When the user tries to install more than one scldevel package from the python27 and rh-python* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_python , %scl_ prefix _python ). php component When the user tries to install more than one scldevel package from the rh-php* Software Collections, a transaction check error message is returned. 
This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_php , %scl_ prefix _php ). ruby component When the user tries to install more than one scldevel package from the rh-ruby* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_ruby , %scl_ prefix _ruby ). perl component When the user tries to install more than one scldevel package from the rh-perl* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_perl , %scl_ prefix _perl ). nginx component When the user tries to install more than one scldevel package from the rh-nginx* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_nginx , %scl_ prefix _nginx ). 1.6. Deprecated Functionality httpd24 component, BZ# 1434053 Previously, in an SSL/TLS configuration requiring name-based SSL virtual host selection, the mod_ssl module rejected requests with a 400 Bad Request error, if the host name provided in the Host: header did not match the host name provided in a Server Name Indication (SNI) header. Such requests are no longer rejected if the configured SSL/TLS security parameters are identical between the selected virtual hosts, in-line with the behavior of upstream mod_ssl . | [
"[mysqld] default_authentication_plugin=caching_sha2_password",
"[mysqld] character-set-server=utf8",
"~]USD scl enable rh-eclipse46 \"eclipse -clean\"",
"ERROR: While executing gem ... (Errno::EROFS) Read-only file system @ dir_s_mkdir - /opt/rh/rh-ruby22/root/usr/local/share/gems",
"Read-only file system: '/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/ipython-3.1.0.dist-info'",
"Cannot install, php_dir for channel \"pear.php.net\" is not writeable by the current user",
"su -l postgres -c \"scl enable rh-postgresql94 psql\"",
"scl enable rh-postgresql94 bash su -l postgres -c psql"
]
| https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.2_release_notes/chap-RHSCL |
Chapter 10. Configure 802.1Q VLAN tagging | Chapter 10. Configure 802.1Q VLAN tagging To create a VLAN, an interface is created on top of another interface referred to as the parent interface . The VLAN interface will tag packets with the VLAN ID as they pass through the interface, and returning packets will be untagged. VLAN interface can be configured similarly to any other interface. The parent interface does not need to be an Ethernet interface. An 802.1Q VLAN tagging interface can be created on top of bridge, bond, and team interfaces, however there are some things to note: In the case of VLANs over bonds, it is important that the bond has ports and that they are " up " before opening the VLAN interface. Adding a VLAN interface to a bond without ports does not work. A VLAN port cannot be configured on a bond with the fail_over_mac=follow option, because the VLAN virtual device cannot change its MAC address to match the parent's new MAC address. In such a case, traffic would still be sent with the now incorrect source MAC address. Sending VLAN tagged packets through a network switch requires the switch to be properly configured. For example, ports on Cisco switches must be assigned to one VLAN or be configured as trunk ports to accept tagged packets from multiple VLANs. Some vendor switches allow untagged frames of the native VLAN to be processed by a trunk port. Some devices allow you to enable or disable the native VLAN , other devices have it disabled by default. Consequence of this disparity may result in native VLAN misconfiguration between two different switches, posing a security risk. For example: One switch uses native VLAN 1 while the other uses native VLAN 10 . If the frames are allowed to pass without the tag being inserted, an attacker is able to jump VLANs - this common network penetration technique is also known as VLAN hopping . To minimize security risks, configure your interface as follows: Switches Unless you need them, disable trunk ports. If you need trunk ports, disable native VLAN , so that untagged frames are not allowed. Red Hat Enterprise Linux server Use the nftables or ebtables utilities to drop untagged frames in ingress filtering. Some older network interface cards, loopback interfaces, Wimax cards, and some InfiniBand devices, are said to be VLAN challenged , meaning they cannot support VLANs. This is usually because the devices cannot cope with VLAN headers and the larger MTU size associated with tagged packets. Note Bonding on top of VLAN is not supported by Red Hat. See the Red Hat Knowledgebase article Whether configuring bond on top of VLAN as port interfaces is a valid configuration? for more information. 10.1. Selecting VLAN Interface Configuration Methods To configure a VLAN interface using NetworkManager 's text user interface tool, nmtui , proceed to Section 10.2, "Configure 802.1Q VLAN tagging Using the Text User Interface, nmtui" To configure a VLAN interface using NetworkManager 's command-line tool, nmcli , proceed to Section 10.3, "Configure 802.1Q VLAN Tagging Using the Command Line Tool, nmcli" To configure a network interface manually , see Section 10.4, "Configure 802.1Q VLAN Tagging Using the Command Line" . To configure a network using graphical user interface tools , proceed to Section 10.5, "Configure 802.1Q VLAN Tagging Using a GUI" | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/ch-configure_802_1q_vlan_tagging |
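As a minimal, non-persistent sketch of the concept described in the chapter above, the following iproute2 commands create a VLAN interface on top of a parent interface; the parent name enp1s0 and VLAN ID 100 are assumptions, and the nmcli and nmtui methods referenced in the chapter are what you would use for a persistent configuration:
ip link add link enp1s0 name enp1s0.100 type vlan id 100
ip link set dev enp1s0.100 up
ip -d link show enp1s0.100
The last command prints the VLAN details, including the 802.1Q protocol and the VLAN ID, so you can confirm that egress traffic on enp1s0.100 will be tagged with ID 100.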
Migration Toolkit for Containers | Migration Toolkit for Containers OpenShift Container Platform 4.12 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/migration_toolkit_for_containers/index |
Chapter 2. Differences from upstream OpenJDK 11 | Chapter 2. Differences from upstream OpenJDK 11 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 11 changes: FIPS support. Red Hat build of OpenJDK 11 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 11 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 11 obtains the list of enabled cryptographic algorithms and key size constraints from RHEL. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources For more information about detecting if a system is in FIPS mode, see the Improve system FIPS detection example on the Red Hat RHEL Planning Jira. For more information about cryptographic policies, see Using system-wide cryptographic policies . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.23/rn-openjdk-diff-from-upstream |
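As a hedged illustration of how the RHEL integrations described above can be inspected on RHEL 8 and later (these are RHEL utilities, not part of the OpenJDK distribution, and their availability depends on the RHEL version):
fips-mode-setup --check
update-crypto-policies --show
The first command reports whether the host is in FIPS mode, which Red Hat build of OpenJDK 11 detects automatically; the second prints the active system-wide cryptographic policy that the JDK follows for TLS and certificate path validation.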
Chapter 14. Installing a cluster using AWS Local Zones | Chapter 14. Installing a cluster using AWS Local Zones In OpenShift Container Platform version 4.12, you can install a cluster on Amazon Web Services (AWS) into an existing VPC, extending workers to the edge of the Cloud Infrastructure using AWS Local Zones. After you create an Amazon Web Service (AWS) Local Zone environment, and you deploy your cluster, you can use edge worker nodes to create user workloads in Local Zone subnets. AWS Local Zones are a type of infrastructure that place Cloud Resources close to the metropolitan regions. For more information, see the AWS Local Zones Documentation . OpenShift Container Platform can be installed in existing VPCs with Local Zone subnets. The Local Zone subnets can be used to extend the regular workers' nodes to the edge networks. The edge worker nodes are dedicated to running user workloads. One way to create the VPC and subnets is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing an installer-provisioned infrastructure installation are provided as an example only. Installing a cluster with VPC you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. The CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 14.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You noted the region and supported AWS Local Zones locations to create the network resources in. You read the Features for each AWS Local Zones location. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 14.2. Cluster limitations in AWS Local Zones Some limitations exist when you attempt to deploy a cluster with a default installation configuration in Amazon Web Services (AWS) Local Zones. 
Important The following list details limitations when deploying a cluster in AWS Local Zones: The Maximum Transmission Unit (MTU) between an Amazon EC2 instance in a Local Zone and an Amazon EC2 instance in the Region is 1300 . This causes the cluster-wide network MTU to change according to the network plugin that is used on the deployment. Network resources such as Network Load Balancer (NLB), Classic Load Balancer, and Network Address Translation (NAT) Gateways are not supported in AWS Local Zones. For an OpenShift Container Platform cluster on AWS, the AWS Elastic Block Storage (EBS) gp3 type volume is the default for node volumes and the default for the storage class. This volume type is not globally available on Local Zone locations. By default, the nodes running in Local Zones are deployed with the gp2 EBS volume. The gp2-csi StorageClass must be set when creating workloads on Local Zone nodes. Additional resources Storage classes 14.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 14.4. Opting into AWS Local Zones If you plan to create the subnets in AWS Local Zones, you must opt in to each zone group separately. Prerequisites You have installed the AWS CLI. You have determined into which region you will deploy your OpenShift Container Platform cluster. Procedure Export a variable to contain the name of the region in which you plan to deploy your OpenShift Container Platform cluster by running the following command: USD export CLUSTER_REGION="<region_name>" 1 1 For <region_name> , specify a valid AWS region name, such as us-east-1 . List the zones that are available in your region by running the following command: USD aws --region USD{CLUSTER_REGION} ec2 describe-availability-zones \ --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \ --filters Name=zone-type,Values=local-zone \ --all-availability-zones Depending on the region, the list of available zones can be long. The command will return the following fields: ZoneName The name of the Local Zone. GroupName The group that the zone is part of. You need to save this name to opt in. Status The status of the Local Zone group. If the status is not-opted-in , you must opt in the GroupName by running the commands that follow. Export a variable to contain the name of the Local Zone to host your VPC by running the following command: USD export ZONE_GROUP_NAME="<value_of_GroupName>" 1 1 The <value_of_GroupName> specifies the name of the group of the Local Zone you want to create subnets on. 
For example, specify us-east-1-nyc-1 to use the zone us-east-1-nyc-1a , US East (New York). Opt in to the zone group on your AWS account by running the following command: USD aws ec2 modify-availability-zone-group \ --group-name "USD{ZONE_GROUP_NAME}" \ --opt-in-status opted-in 14.5. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy worker nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. Sample install-config.yaml file with AWS Marketplace worker nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS region. When creating the installation configuration file, ensure that you select the same AWS region that you specified when configuring your subscription. 14.6. Creating a VPC that uses AWS Local Zones You must create a Virtual Private Cloud (VPC), and subnets for each Local Zone location, in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to extend worker nodes to the edge locations. You can further customize the VPC to meet your requirements, including VPN, route tables, and add new Local Zone subnets that are not included at initial deployment. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You opted in to the AWS Local Zones on your AWS account. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "AvailabilityZoneCount", 5 "ParameterValue": "3" 6 }, { "ParameterKey": "SubnetBits", 7 "ParameterValue": "12" 8 } ] 1 A short, representative cluster name to use for hostnames, etc. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The CIDR block for the VPC. 4 Specify a CIDR block in the format x.x.x.x/16-24 . 5 The number of availability zones to deploy the VPC in. 6 Specify an integer between 1 and 3 . 7 The size of each subnet in each availability zone. 8 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . 
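Optionally, after you save the template file in the next step, you can ask CloudFormation to parse it before launching the stack; this is a hedged sketch and the file name is an assumption:
aws cloudformation validate-template --template-body file://vpc-template.yaml
The command fails fast on template syntax errors, which is cheaper than waiting for a stack creation attempt to roll back.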
Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC by running the following command: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> \ 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. PublicRouteTableId The ID of the new public route table ID. 14.6.1. CloudFormation template for the VPC that uses AWS Local Zones You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster that uses AWS Local Zones. Example 14.1. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: ClusterName: Type: String Description: ClusterName used to prefix resource names VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: ClusterName: default: "" AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-vpc" ] ] - Key: !Join [ "", [ "kubernetes.io/cluster/unmanaged" ] ] Value: "shared" PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-public-1" ] ] PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-public-2" ] ] PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-public-3" ] ] InternetGateway: Type: "AWS::EC2::InternetGateway" Properties: Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-igw" ] ] GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-rtb-public" ] ] PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-private-1" ] ] PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-rtb-private-1" ] ] PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-natgw-private-1" ] ] EIP: Type: 
"AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-private-2" ] ] PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-rtb-private-2" ] ] PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-natgw-private-2" ] ] EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-eip-private-2" ] ] Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-private-3" ] ] PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-rtb-private-3" ] ] PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-natgw-private-3" ] ] EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Tags: - Key: Name Value: !Join [ "", [ !Ref ClusterName, "-eip-private-3" ] ] Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableId: Description: Private Route table ID Value: !Ref PrivateRouteTable 14.7. Creating a subnet in AWS Local Zones You must create a subnet in AWS Local Zones before you configure a worker machineset for your OpenShift Container Platform cluster. You must repeat the following process for each Local Zone you want to deploy worker nodes to. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the subnet. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You opted in to the Local Zone group. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "VpcId", 3 "ParameterValue": "vpc-<random_string>" 4 }, { "ParameterKey": "PublicRouteTableId", 5 "ParameterValue": "<vpc_rtb_pub>" 6 }, { "ParameterKey": "LocalZoneName", 7 "ParameterValue": "<cluster_region_name>-<location_identifier>-<zone_identifier>" 8 }, { "ParameterKey": "LocalZoneNameShort", 9 "ParameterValue": "<lz_zone_shortname>" 10 }, { "ParameterKey": "PublicSubnetCidr", 11 "ParameterValue": "10.0.128.0/20" 12 } ] 1 A short, representative cluster name to use for hostnames, etc. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The VPC ID in which the Local Zone's subnet will be created. 4 Specify the VpcId value from the output of the CloudFormation template for the VPC. 5 The Public Route Table ID for the VPC. 6 Specify the PublicRouteTableId value from the output of the CloudFormation template for the VPC. 7 The Local Zone name that the VPC belongs to. 8 Specify the Local Zone that you opted your AWS account into, such as us-east-1-nyc-1a . 9 The shortname of the AWS Local Zone that the VPC belongs to. 10 Specify a short name for the AWS Local Zone that you opted your AWS account into, such as <zone_group_identified><zone_identifier> . For example, us-east-1-nyc-1a is shortened to nyc-1a . 11 The CIDR block to allow access to the Local Zone. 12 Specify a CIDR block in the format x.x.x.x/16-24 . Copy the template from the CloudFormation template for the subnet section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC by running the following command: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <subnet_stack_name> \ 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <subnet_stack_name> is the name for the CloudFormation stack, such as cluster-lz-<local_zone_shortname> . You need the name of this stack if you remove the cluster. 
2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-lz-nyc1/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <subnet_stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PublicSubnetIds The IDs of the new public subnets. 14.7.1. CloudFormation template for the subnet that uses AWS Local Zones You can use the following CloudFormation template to deploy the subnet that you need for your OpenShift Container Platform cluster that uses AWS Local Zones. Example 14.2. CloudFormation template for the subnet # CloudFormation template used to create Local Zone subnets and dependencies AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: ClusterName: Description: ClusterName used to prefix resource names Type: String VpcId: Description: VPC Id Type: String LocalZoneName: Description: Local Zone Name (Example us-east-1-bos-1) Type: String LocalZoneNameShort: Description: Short name for Local Zone used on tag Name (Example bos1) Type: String PublicRouteTableId: Description: Public Route Table ID to associate the Local Zone subnet Type: String PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for Public Subnet Type: String Resources: PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref LocalZoneName Tags: - Key: Name Value: !Join - "" - [ !Ref ClusterName, "-public-", !Ref LocalZoneNameShort, "-1" ] - Key: kubernetes.io/cluster/unmanaged Value: "true" PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId Outputs: PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ "", [!Ref PublicSubnet] ] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 14.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. 
You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 14.9. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 14.10. Creating the installation files for AWS To install OpenShift Container Platform on Amazon Web Services (AWS) and use AWS Local Zones, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file and Kubernetes manifests. 14.10.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 14.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 14.10.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform for use with AWS Local Zones. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 14.3. Machine types based on 64-bit x86 architecture for AWS Local Zones c5.* c5d.* m6i.* m5.* r5.* t3.* Additional resources See AWS Local Zones features in the AWS documentation for more information about AWS Local Zones and the supported instances types and services. 14.10.3. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. 
Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS region to deploy the cluster to. The region that you specify must be the same region that contains the Local Zone that you opted into for your AWS account. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to provide the subnets for the availability zones that your VPC uses: platform: aws: subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 1 Add the subnets section and specify the PrivateSubnetIds and PublicSubnetIds values from the outputs of the CloudFormation template for the VPC. Do not include the Local Zone subnets here. Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 14.10.4. Creating the Kubernetes manifest files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest files that the cluster needs to configure the machines. 
Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. You installed the jq package. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster by running the following command: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Set the default Maximum Transmission Unit (MTU) according to the network plugin: Important Generally, the Maximum Transmission Unit (MTU) between an Amazon EC2 instance in a Local Zone and an Amazon EC2 instance in the Region is 1300. See How Local Zones work in the AWS documentation. The cluster network MTU must always be less than the EC2 MTU to account for the overhead. The specific overhead is determined by your network plugin, for example: OVN-Kubernetes: 100 bytes OpenShift SDN: 50 bytes The network plugin might provide additional features, such as IPsec, that also require the MTU to be decreased. Check the documentation for additional information. If you are using the OVN-Kubernetes network plugin, enter the following command: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: mtu: 1200 EOF If you are using the OpenShift SDN network plugin, enter the following command: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: mtu: 1250 EOF Create the machine set manifests for the worker nodes in your Local Zone. Export a local variable that contains the name of the Local Zone that you opted your AWS account into by running the following command: USD export LZ_ZONE_NAME="<local_zone_name>" 1 1 For <local_zone_name> , specify the Local Zone that you opted your AWS account into, such as us-east-1-nyc-1a . Review the instance types for the location that you will deploy to by running the following command: USD aws ec2 describe-instance-type-offerings \ --location-type availability-zone \ --filters Name=location,Values=USD{LZ_ZONE_NAME} --region <region> 1 1 For <region> , specify the name of the region that you will deploy to, such as us-east-1 . Export a variable to define the instance type for the worker machines to deploy on the Local Zone subnet by running the following command: USD export INSTANCE_TYPE="<instance_type>" 1 1 Set <instance_type> to a tested instance type, such as c5d.2xlarge . Store the AMI ID as a local variable by running the following command: USD export AMI_ID=USD(grep ami <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-0.yaml \ | tail -n1 | awk '{printUSD2}') Store the subnet ID as a local variable by running the following command: USD export SUBNET_ID=USD(aws cloudformation describe-stacks --stack-name "<subnet_stack_name>" \ 1 | jq -r '.Stacks[0].Outputs[0].OutputValue') 1 For <subnet_stack_name> , specify the name of the subnet stack that you created.
Store the cluster ID as local variable by running the following command: USD export CLUSTER_ID="USD(awk '/infrastructureName: / {print USD2}' <installation_directory>/manifests/cluster-infrastructure-02-config.yml)" Create the worker manifest file for the Local Zone that your VPC uses by running the following command: USD cat <<EOF > <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-nyc1.yaml apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: USD{CLUSTER_ID} name: USD{CLUSTER_ID}-edge-USD{LZ_ZONE_NAME} namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: USD{CLUSTER_ID} machine.openshift.io/cluster-api-machineset: USD{CLUSTER_ID}-edge-USD{LZ_ZONE_NAME} template: metadata: labels: machine.openshift.io/cluster-api-cluster: USD{CLUSTER_ID} machine.openshift.io/cluster-api-machine-role: edge machine.openshift.io/cluster-api-machine-type: edge machine.openshift.io/cluster-api-machineset: USD{CLUSTER_ID}-edge-USD{LZ_ZONE_NAME} spec: metadata: labels: machine.openshift.com/zone-type: local-zone machine.openshift.com/zone-group: USD{ZONE_GROUP_NAME} node-role.kubernetes.io/edge: "" taints: - key: node-role.kubernetes.io/edge effect: NoSchedule providerSpec: value: ami: id: USD{AMI_ID} apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: USD{CLUSTER_ID}-worker-profile instanceType: USD{INSTANCE_TYPE} kind: AWSMachineProviderConfig placement: availabilityZone: USD{LZ_ZONE_NAME} region: USD{CLUSTER_REGION} securityGroups: - filters: - name: tag:Name values: - USD{CLUSTER_ID}-worker-sg subnet: id: USD{SUBNET_ID} publicIp: true tags: - name: kubernetes.io/cluster/USD{CLUSTER_ID} value: owned userDataSecret: name: worker-user-data EOF Additional resources Changing the MTU for the cluster network Enabling IPsec encryption 14.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. 
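If you start the deployment in one terminal and later want to wait for it or check on completion from another shell, the installation program also provides a wait-for subcommand. This is not part of the original procedure; a minimal sketch, assuming the same <installation_directory> used above:
./openshift-install wait-for install-complete --dir <installation_directory> --log-level=info
The command blocks until the installation finishes and then prints the same web console URL and kubeadmin credential information that the create cluster command reports.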
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. steps Creating user workloads in AWS Local Zones 14.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 14.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 14.14. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 14.15. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . 
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 14.16. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . | [
"export CLUSTER_REGION=\"<region_name>\" 1",
"aws --region USD{CLUSTER_REGION} ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=local-zone --all-availability-zones",
"export ZONE_GROUP_NAME=\"<value_of_GroupName>\" 1",
"aws ec2 modify-availability-zone-group --group-name \"USD{ZONE_GROUP_NAME}\" --opt-in-status opted-in",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 5 \"ParameterValue\": \"3\" 6 }, { \"ParameterKey\": \"SubnetBits\", 7 \"ParameterValue\": \"12\" 8 } ]",
"aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: ClusterName: Type: String Description: ClusterName used to prefix resource names VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: ClusterName: default: \"\" AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-vpc\" ] ] - Key: !Join [ \"\", [ \"kubernetes.io/cluster/unmanaged\" ] ] Value: \"shared\" PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-public-1\" ] ] PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-public-2\" ] ] PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-public-3\" ] ] InternetGateway: Type: \"AWS::EC2::InternetGateway\" Properties: Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-igw\" ] ] GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-rtb-public\" ] ] PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable 
PublicSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-private-1\" ] ] PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-rtb-private-1\" ] ] PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-natgw-private-1\" ] ] EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-private-2\" ] ] PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-rtb-private-2\" ] ] PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-natgw-private-2\" ] ] EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-eip-private-2\" ] ] Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-private-3\" ] ] PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-rtb-private-3\" ] ] PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-natgw-private-3\" ] ] EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Tags: - Key: Name Value: !Join [ \"\", [ !Ref ClusterName, \"-eip-private-3\" ] ] Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: 
RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableId: Description: Private Route table ID Value: !Ref PrivateRouteTable",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"VpcId\", 3 \"ParameterValue\": \"vpc-<random_string>\" 4 }, { \"ParameterKey\": \"PublicRouteTableId\", 5 \"ParameterValue\": \"<vpc_rtb_pub>\" 6 }, { \"ParameterKey\": \"LocalZoneName\", 7 \"ParameterValue\": \"<cluster_region_name>-<location_identifier>-<zone_identifier>\" 8 }, { \"ParameterKey\": \"LocalZoneNameShort\", 9 \"ParameterValue\": \"<lz_zone_shortname>\" 10 }, { \"ParameterKey\": \"PublicSubnetCidr\", 11 \"ParameterValue\": \"10.0.128.0/20\" 12 } ]",
"aws cloudformation create-stack --stack-name <subnet_stack_name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-lz-nyc1/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <subnet_stack_name>",
"CloudFormation template used to create Local Zone subnets and dependencies AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: ClusterName: Description: ClusterName used to prefix resource names Type: String VpcId: Description: VPC Id Type: String LocalZoneName: Description: Local Zone Name (Example us-east-1-bos-1) Type: String LocalZoneNameShort: Description: Short name for Local Zone used on tag Name (Example bos1) Type: String PublicRouteTableId: Description: Public Route Table ID to associate the Local Zone subnet Type: String PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for Public Subnet Type: String Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref LocalZoneName Tags: - Key: Name Value: !Join - \"\" - [ !Ref ClusterName, \"-public-\", !Ref LocalZoneNameShort, \"-1\" ] - Key: kubernetes.io/cluster/unmanaged Value: \"true\" PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId Outputs: PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \"\", [!Ref PublicSubnet] ]",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: aws: subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3",
"./openshift-install create manifests --dir <installation_directory> 1",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: mtu: 1200 EOF",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: mtu: 1250 EOF",
"export LZ_ZONE_NAME=\"<local_zone_name>\" 1",
"aws ec2 describe-instance-type-offerings --location-type availability-zone --filters Name=location,Values=USD{LZ_ZONE_NAME} --region <region> 1",
"export INSTANCE_TYPE=\"<instance_type>\" 1",
"export AMI_ID=USD(grep ami <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-0.yaml | tail -n1 | awk '{printUSD2}')",
"export SUBNET_ID=USD(aws cloudformation describe-stacks --stack-name \"<subnet_stack_name>\" \\ 1 | jq -r '.Stacks[0].Outputs[0].OutputValue')",
"export CLUSTER_ID=\"USD(awk '/infrastructureName: / {print USD2}' <installation_directory>/manifests/cluster-infrastructure-02-config.yml)\"",
"cat <<EOF > <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-nyc1.yaml apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: USD{CLUSTER_ID} name: USD{CLUSTER_ID}-edge-USD{LZ_ZONE_NAME} namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: USD{CLUSTER_ID} machine.openshift.io/cluster-api-machineset: USD{CLUSTER_ID}-edge-USD{LZ_ZONE_NAME} template: metadata: labels: machine.openshift.io/cluster-api-cluster: USD{CLUSTER_ID} machine.openshift.io/cluster-api-machine-role: edge machine.openshift.io/cluster-api-machine-type: edge machine.openshift.io/cluster-api-machineset: USD{CLUSTER_ID}-edge-USD{LZ_ZONE_NAME} spec: metadata: labels: machine.openshift.com/zone-type: local-zone machine.openshift.com/zone-group: USD{ZONE_GROUP_NAME} node-role.kubernetes.io/edge: \"\" taints: - key: node-role.kubernetes.io/edge effect: NoSchedule providerSpec: value: ami: id: USD{AMI_ID} apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: USD{CLUSTER_ID}-worker-profile instanceType: USD{INSTANCE_TYPE} kind: AWSMachineProviderConfig placement: availabilityZone: USD{LZ_ZONE_NAME} region: USD{CLUSTER_REGION} securityGroups: - filters: - name: tag:Name values: - USD{CLUSTER_ID}-worker-sg subnet: id: USD{SUBNET_ID} publicIp: true tags: - name: kubernetes.io/cluster/USD{CLUSTER_ID} value: owned userDataSecret: name: worker-user-data EOF",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_aws/installing-aws-localzone |
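Not part of the original procedure: after the installation reports Install complete, a quick way to confirm that the Local Zone machine set and its edge node exist is to query the machine API objects and the node label defined in the manifest from section 14.10.4. A minimal sketch using standard oc commands:
oc get machinesets -n openshift-machine-api
oc get machines -n openshift-machine-api | grep edge
oc get nodes -l node-role.kubernetes.io/edge
The machine set name carries the -edge-<local_zone_name> suffix from the manifest, and the node carries the node-role.kubernetes.io/edge label and NoSchedule taint, so only workloads that tolerate that taint are scheduled there.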
21.2.3. Using TCP | 21.2.3. Using TCP The default transport protocol for NFSv4 is TCP; however, the Red Hat Enterprise Linux 4 kernel includes support for NFS over UDP. To use NFS over UDP, include the -o udp option to mount when mounting the NFS-exported file system on the client system. There are three ways to configure an NFS file system export. On demand via the command line (client side), automatically via the /etc/fstab file (client side), and automatically via autofs configuration files, such as /etc/auto.master and /etc/auto.misc (server side with NIS). For example, on demand via the command line (client side): When the NFS mount is specified in /etc/fstab (client side): When the NFS mount is specified in an autofs configuration file for a NIS server, available for NIS enabled workstations: Since the default is TCP, if the -o udp option is not specified, the NFS-exported file system is accessed via TCP. The advantages of using TCP include the following: Improved connection durability, thus less NFS stale file handles messages. Performance gain on heavily loaded networks because TCP acknowledges every packet, unlike UDP which only acknowledges completion. TCP has better congestion control than UDP (which has none). On a very congested network, UDP packets are the first packets that are dropped. This means that if NFS is writing data (in 8K chunks) all of that 8K must be retransmitted over UDP. Because of TCP's reliability, only parts of that 8K data are transmitted at a time. Error detection. When a TCP connection breaks (due to the server being unavailable) the client stops sending data and restarts the connection process once the server becomes available. With UDP, since it's connection-less, the client continues to pound the network with data until the server reestablishes a connection. The main disadvantage is that there is a very small performance hit due to the overhead associated with the TCP protocol. | [
"mount -o udp shadowman.example.com:/misc/export /misc/local",
"server:/usr/local/pub /pub nfs rsize=8192,wsize=8192,timeo=14,intr,udp",
"myproject -rw,soft,intr,rsize=8192,wsize=8192,udp penguin.example.net:/proj52"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Mounting_NFS_File_Systems-Using_TCP |
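Because TCP is the default transport, simply omitting the -o udp option gives a TCP mount. A small sketch of how the transport might be confirmed after mounting, reusing the example export from above (nfsstat is part of the standard nfs-utils package):
mount shadowman.example.com:/misc/export /misc/local
nfsstat -m
The nfsstat -m output lists each NFS mount together with its mount options, where the proto (or tcp/udp) flag shows which transport is actually in use.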
8.80. gvfs | 8.80. gvfs 8.80.1. RHBA-2014:1499 - gvfs bug fix update Updated gvfs packages that fix several bugs are now available for Red Hat Enterprise Linux 6. GVFS is the GNOME desktop's virtual file system layer, which allows users to easily access local and remote data, including through the FTP, SFTP, WebDAV, CIFS, and SMB protocols, among others. GVFS integrates with the GIO (GNOME I/O) abstraction layer. Bug Fixes BZ# 902448 Previously, when several clients using the same home directory located on remote NFS (Network File System) modified the gvfs-metadata database files, a conflict could occur. In addition, GVFS produced heavy traffic on the remote NFS server. With this update, countermeasures for possible conflicts have been put in place and metadata journal files have been relocated to a temporary directory, and GVFS no longer produces heavy traffic on the NFS mount. BZ# 1011835 Prior to this update, GVFS did not pass a mount prefix into the rename operation. Consequently, it was not possible to rename files on the WebDAV shares if mount prefix was specified, and the following message was displayed when attempting to do so: The item could not be renamed. Sorry, could not rename "dir1" to "dir2": Thespecified location is not mounted This bug has been fixed and the mount prefix is now passed into the rename operation as expected. As a result, the rename operation works correctly on the WebDAV shares. BZ# 1049232 When the GDesktopAppInfoLookup extension processed a URL scheme that contained invalid characters, for example from Thunderbird messages, a request for URL handlers was unsuccessful. Consequently, an error dialog notifying about the invalid character was shown. With this update, GDesktopAppInfoLookup has been modified to check the URL scheme for invalid characters before it is used. As a result, the aforementioned error no longer occurs. BZ# 883021 A GLib2 rebase BZ# 1118325 Previously, GVFS used the select() function to communicate with the OpenSSH utility. Due to changes introduced with the OpenSSH update, select() could return incomplete results. Consequently, mounting of SFTP locations failed with the following message: Error mounting location: Error reading from unix: Input/output error GVFS has been updated to use the poll() function instead of select(), thus fixing this bug. BZ# 1101389 A GLib2 rebase caused a namespace conflict between GVFS and GIO. As a consequence, GVFS failed to build. To fix this bug, affected modules have been renamed, and the building process of GVFS now succeeds. (BZ#1071374) marked the GDesktopAppInfo class as deprecated. Consequently, GVFS failed to compile. With this update, GdesktopAppInfo is not regarded as deprecated in this specific scenario. As a result, GVFS compiles as expected. (BZ#1118704) Users of gvfs are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/gvfs |
Chapter 1. Developing clients overview | Chapter 1. Developing clients overview Develop Kafka client applications for your AMQ Streams installation that can produce messages, consume messages, or do both. You can develop client applications for use with AMQ Streams on OpenShift or AMQ Streams on RHEL. Messages comprise an optional key and a value that contains the message data, plus headers and related metadata. The key identifies the subject of the message, or a property of the message. You must use the same key if you need to process a group of messages in the same order as they are sent. Messages are delivered in batches. Messages contain headers and metadata that provide details that are useful for filtering and routing by clients, such as the timestamp and offset position for the message. Kafka provides client APIs for developing client applications. Kafka producer and consumer APIs are the primary means of interacting with a Kafka cluster in a client application. The APIs control the flow of messages. The producer API sends messages to Kafka topics, while the consumer API reads messages from topics. AMQ Streams supports clients written in Java. How you develop your clients depends on your specific use case. Data durability might be a priority or high throughput. These demands can be met through configuration of your clients and brokers. All clients, however, must be able to connect to all brokers in a given Kafka cluster. 1.1. Supporting a HTTP client As an alternative to using the Kafka producer and consumer APIs in your client, you can set up and use the AMQ Streams Kafka Bridge. The Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. It offers the advantages of a web API connection to Strimzi, without the need for client applications that need to interpret the Kafka protocol. Kafka uses a binary protocol over TCP. For more information, see Using the AMQ Streams Kafka Bridge . 1.2. Tuning your producers and consumers You can add more configuration properties to optimize the performance of your Kafka clients. You probably want to do this when you've had some time to analyze how your client and broker configuration performs. For more information, see Kafka configuration tuning . 1.3. Monitoring client interaction Distributed tracing facilitates the end-to-end tracking of messages. You can enable tracing in Kafka consumer and producer client applications. For more information, see the documentation for distributed tracing in the following guides: Deploying and Upgrading AMQ Streams on OpenShift Using AMQ Streams on RHEL Note When we use the term client application, we're specifically referring to applications that use Kafka producers and consumers to send and receive messages to and from a Kafka cluster. We are not referring to other Kafka components, such as Kafka Connect or Kafka Streams, which have their own distinct use cases and functionality. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/developing_kafka_client_applications/con-client-dev-intro-str |
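The produce and consume flow described above can be tried without writing a Java client by using the console tools that ship with Kafka. A minimal sketch, assuming a broker reachable at localhost:9092 and a topic named my-topic (both placeholder values):
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning
Lines typed into the producer become message values on the topic and the consumer prints them back, which mirrors what the producer and consumer APIs do inside a client application.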
Chapter 6. Installing and configuring the Tekton plugin | Chapter 6. Installing and configuring the Tekton plugin You can use the Tekton plugin to visualize the results of CI/CD pipeline runs on your Kubernetes or OpenShift clusters. The plugin allows users to visually see high level status of all associated tasks in the pipeline for their applications. 6.1. Installation Prerequisites You have installed and configured the @backstage/plugin-kubernetes and @backstage/plugin-kubernetes-backend dynamic plugins. You have configured the Kubernetes plugin to connect to the cluster using a ServiceAccount . The ClusterRole must be granted for custom resources (PipelineRuns and TaskRuns) to the ServiceAccount accessing the cluster. Note If you have the RHDH Kubernetes plugin configured, then the ClusterRole is already granted. To view the pod logs, you have granted permissions for pods/log . You can use the following code to grant the ClusterRole for custom resources and pod logs: kubernetes: ... customResources: - group: 'tekton.dev' apiVersion: 'v1' plural: 'pipelineruns' - group: 'tekton.dev' apiVersion: 'v1' ... apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - "" resources: - pods/log verbs: - get - list - watch ... - apiGroups: - tekton.dev resources: - pipelineruns - taskruns verbs: - get - list You can use the prepared manifest for a read-only ClusterRole , which provides access for both Kubernetes plugin and Tekton plugin. Add the following annotation to the entity's catalog-info.yaml file to identify whether an entity contains the Kubernetes resources: annotations: ... backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME> You can also add the backstage.io/kubernetes-namespace annotation to identify the Kubernetes resources using the defined namespace. annotations: ... backstage.io/kubernetes-namespace: <RESOURCE_NS> Add the following annotation to the catalog-info.yaml file of the entity to enable the Tekton related features in RHDH. The value of the annotation identifies the name of the RHDH entity: annotations: ... janus-idp.io/tekton : <BACKSTAGE_ENTITY_NAME> Add a custom label selector, which RHDH uses to find the Kubernetes resources. The label selector takes precedence over the ID annotations. annotations: ... backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end' Add the following label to the resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity: labels: ... backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME> Note When you use the label selector, the mentioned labels must be present on the resource. Procedure The Tekton plugin is pre-loaded in RHDH with basic configuration properties. To enable it, set the disabled property to false as follows: global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-community-plugin-tekton disabled: false | [
"kubernetes: customResources: - group: 'tekton.dev' apiVersion: 'v1' plural: 'pipelineruns' - group: 'tekton.dev' apiVersion: 'v1' apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - \"\" resources: - pods/log verbs: - get - list - watch - apiGroups: - tekton.dev resources: - pipelineruns - taskruns verbs: - get - list",
"annotations: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>",
"annotations: backstage.io/kubernetes-namespace: <RESOURCE_NS>",
"annotations: janus-idp.io/tekton : <BACKSTAGE_ENTITY_NAME>",
"annotations: backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'",
"labels: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-community-plugin-tekton disabled: false"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/configuring_dynamic_plugins/installation-and-configuration-tekton |
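Putting the annotations above together, a catalog-info.yaml for a component might look like the following sketch; the entity name my-app, namespace my-ns, and owner my-team are hypothetical placeholders, and you would keep only the annotations that apply to your setup.

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-app
  annotations:
    # Ties the entity to Kubernetes resources labelled with the same ID.
    backstage.io/kubernetes-id: my-app
    # Optional: restrict lookups to a single namespace.
    backstage.io/kubernetes-namespace: my-ns
    # Enables the Tekton-related features for this entity in RHDH.
    janus-idp.io/tekton: my-app
spec:
  type: service
  lifecycle: production
  owner: my-team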
Chapter 5. Profiles | Chapter 5. Profiles There are features in Red Hat Single Sign-On that are not enabled by default; these include features that are not fully supported. In addition, there are some features that are enabled by default, but that can be disabled. The features that can be enabled and disabled are: Name Description Enabled by default Support level account2 New Account Management Console Yes Supported account_api Account Management REST API Yes Supported admin_fine_grained_authz Fine-Grained Admin Permissions No Preview ciba OpenID Connect Client Initiated Backchannel Authentication (CIBA) Yes Supported client_policies Add client configuration policies Yes Supported client_secret_rotation Enables client secret rotation for confidential clients Yes Preview par OAuth 2.0 Pushed Authorization Requests (PAR) Yes Supported declarative_user_profile Configure user profiles using a declarative style No Preview docker Docker Registry protocol No Supported impersonation Ability for admins to impersonate users Yes Supported openshift_integration Extension to enable securing OpenShift No Preview recovery_codes Recovery codes for authentication No Preview scripts Write custom authenticators using JavaScript No Preview step_up_authentication Step-up authentication Yes Supported token_exchange Token Exchange Service No Preview upload_scripts Upload scripts No Deprecated web_authn W3C Web Authentication (WebAuthn) Yes Supported update_email Update Email Workflow No Preview To enable all preview features, start the server with: You can set this permanently by creating the file standalone/configuration/profile.properties (or domain/servers/server-one/configuration/profile.properties for server-one in domain mode). Add the following to the file: To enable a specific feature, start the server with: For example, to enable Docker use -Dkeycloak.profile.feature.docker=enabled . You can set this permanently in the profile.properties file by adding: To disable a specific feature, start the server with: For example, to disable Impersonation use -Dkeycloak.profile.feature.impersonation=disabled . You can set this permanently in the profile.properties file by adding:
"bin/standalone.sh|bat -Dkeycloak.profile=preview",
"profile=preview",
"bin/standalone.sh|bat -Dkeycloak.profile.feature.<feature name>=enabled",
"feature.docker=enabled",
"bin/standalone.sh|bat -Dkeycloak.profile.feature.<feature name>=disabled",
"feature.impersonation=disabled"
]
| https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_installation_and_configuration_guide/profiles |
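As a combined illustration, a standalone/configuration/profile.properties file that enables and disables several individual features at once might look like the following sketch; the particular selection of features is an assumption for the example, not a recommendation.

# Enable individual features that are off by default
feature.docker=enabled
feature.token_exchange=enabled
feature.admin_fine_grained_authz=enabled
# Disable a feature that is enabled by default
feature.impersonation=disabled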
Chapter 5. MTR 1.2.4 | Chapter 5. MTR 1.2.4 5.1. New features This section describes the new features of the Migration Toolkit for Runtimes (MTR) 1.2.4: New rules support the migration of Red Hat JBoss Enterprise Application Platform (EAP 7) to EAP 8. New rules support the migration of Jakarta EE applications to Quarkus. 5.2. Known issues For a complete list of all known issues, see the list of MTR 1.2.4 known issues in Jira. 5.3. Resolved issues CVE-2023-26159: follow-redirects package before 1.15.4 are vulnerable to Improper Input Validation Versions of the follow-redirects package before 1.15.4 are vulnerable to Improper Input Validation. This vulnerability is due to the improper handling of URLs by the url.parse() function. When a new URL returns an error, it can be manipulated to misinterpret the hostname. An attacker could exploit this weakness to redirect traffic to a malicious site, potentially leading to information disclosure, phishing attacks, or other security breaches. For more details, see (CVE-2023-26159) . CVE-2022-25883: Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the node-semver package Versions of the semver npm package before 7.5.2 are vulnerable to Regular Expression Denial of Service (ReDoS). This ReDoS vulnerability comes from the new Range function, when untrusted user data is provided as a range. For more details, see (CVE-2022-25883) . CVE-2023-26136: tough-cookie package before 4.1.3 are vulnerable to Prototype Pollution Versions of the tough-cookie package before 4.1.3 are vulnerable to Prototype Pollution. This vulnerability is due to improper handling of Cookies when using CookieJar in rejectPublicSuffixes=false mode. This issue arises from the manner in which the objects are initialized. For more details, see (CVE-2023-26136) . CVE-2023-35116: jackson-databind before 2.15.2 are vulnerable to Denial of Service or other unspecified impact Versions of the jackson-databind library before 2.15.2 are vulnerable to Denial of Service (DoS) attacks or other unspecified impacts using a crafted object that uses cyclic dependencies. For more details, see (CVE-2023-35116) . For a complete list of all issues resolved in this release, see the list of MTR 1.2.4 resolved issues in Jira. | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/release_notes/mtr_1_2_4 |
2.4. AMD Hawaii GPU Support | 2.4. AMD Hawaii GPU Support Red Hat Enterprise Linux 7.1 enables support for hardware acceleration on AMD graphics cards using the Hawaii core (AMD Radeon R9 290 and AMD Radeon R9 290X). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/sect-hardware_enablement-amd_hawaii |
Chapter 7. AWS Simple Notification System (SNS) | Chapter 7. AWS Simple Notification System (SNS) Only producer is supported The AWS2 SNS component allows messages to be sent to an Amazon Simple Notification Topic. The implementation of the Amazon API is provided by the AWS SDK . Prerequisites You must have a valid Amazon Web Services developer account, and be signed up to use Amazon SNS. More information is available at Amazon SNS . 7.1. URI Format The topic will be created if it does not already exist. You can append query options to the URI in the following format, ?options=value&option2=value&... 7.2. URI Options 7.2.1. Configuring Options Camel components are configured on two separate levels: component level endpoint level 7.2.1.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 7.2.1.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows you to avoid hardcoding urls, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, and give more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 7.3. Component Options The AWS Simple Notification System (SNS) component supports 24 options, which are listed below. Name Description Default Type amazonSNSClient (producer) Autowired To use the AmazonSNS as the client. SnsClient autoCreateTopic (producer) Setting the autocreation of the topic. false boolean configuration (producer) Component configuration. Sns2Configuration kmsMasterKeyId (producer) The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean messageDeduplicationIdStrategy (producer) Only for FIFO Topic. Strategy for setting the messageDeduplicationId on the message.
Can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. Enum values: useExchangeId useContentBasedDeduplication useExchangeId String messageGroupIdStrategy (producer) Only for FIFO Topic. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. Enum values: useConstant useExchangeId usePropertyValue String messageStructure (producer) The message structure to use such as json. String overrideEndpoint (producer) Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean policy (producer) The policy for this topic. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String proxyHost (producer) To define a proxy host when instantiating the SNS client. String proxyPort (producer) To define a proxy port when instantiating the SNS client. Integer proxyProtocol (producer) To define a proxy protocol when instantiating the SNS client. Enum values: HTTP HTTPS HTTPS Protocol queueUrl (producer) The queueUrl to subscribe to. String region (producer) The region in which SNS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String serverSideEncryptionEnabled (producer) Define if Server Side Encryption is enabled or not on the topic. false boolean subject (producer) The subject which is used if the message header 'CamelAwsSnsSubject' is not present. String subscribeSNStoSQS (producer) Define if the subscription between SNS Topic and SQS must be done or not. false boolean trustAllCertificates (producer) If we want to trust all certificates in case of overriding the endpoint. false boolean uriEndpointOverride (producer) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (producer) Set whether the SNS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String 7.4. Endpoint Options The AWS Simple Notification System (SNS) endpoint is configured using URI syntax: with the following path and query parameters: 7.4.1. Path Parameters (1 parameters) Name Description Default Type topicNameOrArn (producer) Required Topic name or ARN. String 7.4.2. Query Parameters (23 parameters) Name Description Default Type amazonSNSClient (producer) Autowired To use the AmazonSNS as the client. SnsClient autoCreateTopic (producer) Setting the autocreation of the topic. false boolean headerFilterStrategy (producer) To use a custom HeaderFilterStrategy to map headers to/from Camel. 
HeaderFilterStrategy kmsMasterKeyId (producer) The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean messageDeduplicationIdStrategy (producer) Only for FIFO Topic. Strategy for setting the messageDeduplicationId on the message. Can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. Enum values: useExchangeId useContentBasedDeduplication useExchangeId String messageGroupIdStrategy (producer) Only for FIFO Topic. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. Enum values: useConstant useExchangeId usePropertyValue String messageStructure (producer) The message structure to use such as json. String overrideEndpoint (producer) Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean policy (producer) The policy for this topic. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String proxyHost (producer) To define a proxy host when instantiating the SNS client. String proxyPort (producer) To define a proxy port when instantiating the SNS client. Integer proxyProtocol (producer) To define a proxy protocol when instantiating the SNS client. Enum values: HTTP HTTPS HTTPS Protocol queueUrl (producer) The queueUrl to subscribe to. String region (producer) The region in which SNS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String serverSideEncryptionEnabled (producer) Define if Server Side Encryption is enabled or not on the topic. false boolean subject (producer) The subject which is used if the message header 'CamelAwsSnsSubject' is not present. String subscribeSNStoSQS (producer) Define if the subscription between SNS Topic and SQS must be done or not. false boolean trustAllCertificates (producer) If we want to trust all certificates in case of overriding the endpoint. false boolean uriEndpointOverride (producer) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (producer) Set whether the SNS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. false boolean accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String Required SNS component options You have to provide the amazonSNSClient in the Registry or your accessKey and secretKey to access the Amazon's SNS . 7.5. 
Usage 7.5.1. Static credentials vs Default Credential Provider You can avoid the use of explicit static credentials by specifying the useDefaultCredentialsProvider option and setting it to true. The credentials are then resolved from the default locations: Java system properties - aws.accessKeyId and aws.secretKey Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Web Identity Token from AWS STS. The shared credentials and config files. Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set. Amazon EC2 Instance profile credentials. For more information about this you can look at the AWS credentials documentation . 7.5.2. Message headers evaluated by the SNS producer Header Type Description CamelAwsSnsSubject String The Amazon SNS message subject. If not set, the subject from the SnsConfiguration is used. 7.5.3. Message headers set by the SNS producer Header Type Description CamelAwsSnsMessageId String The Amazon SNS message ID. 7.5.4. Advanced AmazonSNS configuration If you need more control over the SnsClient instance configuration, you can create your own instance and refer to it from the URI: from("direct:start") .to("aws2-sns://MyTopic?amazonSNSClient=#client"); The #client refers to an AmazonSNS in the Registry. 7.5.5. Create a subscription between an AWS SNS Topic and an AWS SQS Queue You can create a subscription of an SQS Queue to an SNS Topic in this way: from("direct:start") .to("aws2-sns://test-camel-sns1?amazonSNSClient=#amazonSNSClient&subscribeSNStoSQS=true&queueUrl=https://sqs.eu-central-1.amazonaws.com/780410022472/test-camel"); The #amazonSNSClient refers to an SnsClient in the Registry. By setting subscribeSNStoSQS to true and providing the queueUrl of an existing SQS Queue, you'll be able to subscribe your SQS Queue to your SNS Topic. At this point you can consume messages coming from the SNS Topic through your SQS Queue from("aws2-sqs://test-camel?amazonSQSClient=#amazonSQSClient&delay=50&maxMessagesPerPoll=5") .to(...); 7.6. Topic Autocreation With the option autoCreateTopic users are able to avoid the autocreation of an SNS Topic in case it doesn't exist. The default for this option is true . If set to false any operation on a non-existent topic in AWS won't be successful and an error will be returned. 7.7. SNS FIFO SNS FIFO topics are supported. When creating the SQS queue that you will subscribe to the SNS topic, there is an important point to remember: you'll need to make it possible for the SNS Topic to send messages to the SQS Queue. Example Suppose you created an SNS FIFO Topic called Order.fifo and an SQS Queue called QueueSub.fifo . In the access Policy of the QueueSub.fifo you should submit something like this: { "Version": "2008-10-17", "Id": "__default_policy_ID", "Statement": [ { "Sid": "__owner_statement", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::780560123482:root" }, "Action": "SQS:*", "Resource": "arn:aws:sqs:eu-west-1:780560123482:QueueSub.fifo" }, { "Effect": "Allow", "Principal": { "Service": "sns.amazonaws.com" }, "Action": "SQS:SendMessage", "Resource": "arn:aws:sqs:eu-west-1:780560123482:QueueSub.fifo", "Condition": { "ArnLike": { "aws:SourceArn": "arn:aws:sns:eu-west-1:780410022472:Order.fifo" } } } ] } This is a critical step to make the subscription work correctly. 7.7.1. SNS Fifo Topic Message group Id Strategy and message Deduplication Id Strategy When sending something to the FIFO topic, you'll always need to set up a message group Id strategy.
If content-based message deduplication has been enabled on the SNS FIFO topic, there is no need to set a message deduplication id strategy; otherwise, you'll have to set it. 7.8. Examples 7.8.1. Producer Examples Sending to a topic from("direct:start") .to("aws2-sns://camel-topic?subject=The+subject+message&autoCreateTopic=true"); 7.9. Dependencies Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-sns</artifactId> <version>USD{camel-version}</version> </dependency> where USD{camel-version} must be replaced by the actual version of Camel. 7.10. Spring Boot Auto-Configuration When using aws2-sns with Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-sns-starter</artifactId> </dependency> The component supports 25 options, which are listed below. Name Description Default Type camel.component.aws2-sns.access-key Amazon AWS Access Key. String camel.component.aws2-sns.amazon-s-n-s-client To use the AmazonSNS as the client. The option is a software.amazon.awssdk.services.sns.SnsClient type. SnsClient camel.component.aws2-sns.auto-create-topic Setting the autocreation of the topic. false Boolean camel.component.aws2-sns.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.aws2-sns.configuration Component configuration. The option is a org.apache.camel.component.aws2.sns.Sns2Configuration type. Sns2Configuration camel.component.aws2-sns.enabled Whether to enable auto configuration of the aws2-sns component. This is enabled by default. Boolean camel.component.aws2-sns.kms-master-key-id The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK. String camel.component.aws2-sns.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.aws2-sns.message-deduplication-id-strategy Only for FIFO Topic. Strategy for setting the messageDeduplicationId on the message. Can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. useExchangeId String camel.component.aws2-sns.message-group-id-strategy Only for FIFO Topic. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. String camel.component.aws2-sns.message-structure The message structure to use such as json.
String camel.component.aws2-sns.override-endpoint Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false Boolean camel.component.aws2-sns.policy The policy for this topic. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.aws2-sns.proxy-host To define a proxy host when instantiating the SNS client. String camel.component.aws2-sns.proxy-port To define a proxy port when instantiating the SNS client. Integer camel.component.aws2-sns.proxy-protocol To define a proxy protocol when instantiating the SNS client. Protocol camel.component.aws2-sns.queue-url The queueUrl to subscribe to. String camel.component.aws2-sns.region The region in which SNS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String camel.component.aws2-sns.secret-key Amazon AWS Secret Key. String camel.component.aws2-sns.server-side-encryption-enabled Define if Server Side Encryption is enabled or not on the topic. false Boolean camel.component.aws2-sns.subject The subject which is used if the message header 'CamelAwsSnsSubject' is not present. String camel.component.aws2-sns.subscribe-s-n-sto-s-q-s Define if the subscription between SNS Topic and SQS must be done or not. false Boolean camel.component.aws2-sns.trust-all-certificates If we want to trust all certificates in case of overriding the endpoint. false Boolean camel.component.aws2-sns.uri-endpoint-override Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String camel.component.aws2-sns.use-default-credentials-provider Set whether the SNS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. false Boolean | [
"aws2-sns://topicNameOrArn[?options]",
"aws2-sns:topicNameOrArn",
"from(\"direct:start\") .to(\"aws2-sns://MyTopic?amazonSNSClient=#client\");",
"from(\"direct:start\") .to(\"aws2-sns://test-camel-sns1?amazonSNSClient=#amazonSNSClient&subscribeSNStoSQS=true&queueUrl=https://sqs.eu-central-1.amazonaws.com/780410022472/test-camel\");",
"from(\"aws2-sqs://test-camel?amazonSQSClient=#amazonSQSClient&delay=50&maxMessagesPerPoll=5\") .to(...);",
"{ \"Version\": \"2008-10-17\", \"Id\": \"__default_policy_ID\", \"Statement\": [ { \"Sid\": \"__owner_statement\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::780560123482:root\" }, \"Action\": \"SQS:*\", \"Resource\": \"arn:aws:sqs:eu-west-1:780560123482:QueueSub.fifo\" }, { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"sns.amazonaws.com\" }, \"Action\": \"SQS:SendMessage\", \"Resource\": \"arn:aws:sqs:eu-west-1:780560123482:QueueSub.fifo\", \"Condition\": { \"ArnLike\": { \"aws:SourceArn\": \"arn:aws:sns:eu-west-1:780410022472:Order.fifo\" } } } ] }",
"from(\"direct:start\") .to(\"aws2-sns://camel-topic?subject=The+subject+message&autoCreateTopic=true\");",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-sns</artifactId> <version>USD{camel-version}</version> </dependency>",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-sns-starter</artifactId> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-aws2-sns-component-starter |
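For the advanced configuration described in section 7.5.4, the client bound in the registry as #client could be built as in the following sketch; the region, credentials, topic name, and subject shown here are illustrative assumptions rather than required values.

import org.apache.camel.builder.RouteBuilder;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sns.SnsClient;

public class SnsRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Build a custom SnsClient; the region and credentials are placeholders.
        SnsClient client = SnsClient.builder()
                .region(Region.EU_WEST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("myAccessKey", "mySecretKey")))
                .build();
        // Bind it into the Camel registry so the endpoint can reference it as #client.
        getContext().getRegistry().bind("client", client);

        from("direct:start")
            // Header evaluated by the producer to set the SNS message subject.
            .setHeader("CamelAwsSnsSubject", constant("Order update"))
            .to("aws2-sns://MyTopic?amazonSNSClient=#client");
    }
}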
Chapter 6. Cluster Operators reference | Chapter 6. Cluster Operators reference This reference guide indexes the cluster Operators shipped by Red Hat that serve as the architectural foundation for OpenShift Container Platform. Cluster Operators are installed by default, unless otherwise noted, and are managed by the Cluster Version Operator (CVO). For more details on the control plane architecture, see Operators in OpenShift Container Platform . Cluster administrators can view cluster Operators in the OpenShift Container Platform web console from the Administration Cluster Settings page. Note Cluster Operators are not managed by Operator Lifecycle Manager (OLM) and OperatorHub. OLM and OperatorHub are part of the Operator Framework used in OpenShift Container Platform for installing and running optional add-on Operators . Some of the following cluster Operators can be disabled prior to installation. For more information see cluster capabilities . 6.1. Cluster Baremetal Operator Note The Cluster Baremetal Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster Baremetal Operator (CBO) deploys all the components necessary to take a bare-metal server to a fully functioning worker node ready to run OpenShift Container Platform compute nodes. The CBO ensures that the metal3 deployment, which consists of the Bare Metal Operator (BMO) and Ironic containers, runs on one of the control plane nodes within the OpenShift Container Platform cluster. The CBO also listens for OpenShift Container Platform updates to resources that it watches and takes appropriate action. Project cluster-baremetal-operator Additional resources Bare-metal capability 6.2. Bare Metal Event Relay Purpose The OpenShift Bare Metal Event Relay manages the life-cycle of the Bare Metal Event Relay. The Bare Metal Event Relay enables you to configure the types of cluster event that are monitored using Redfish hardware events. Configuration objects You can use this command to edit the configuration after installation: for example, the webhook port. You can edit configuration objects with: USD oc -n [namespace] edit cm hw-event-proxy-operator-manager-config apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 kind: ControllerManagerConfig health: healthProbeBindAddress: :8081 metrics: bindAddress: 127.0.0.1:8080 webhook: port: 9443 leaderElection: leaderElect: true resourceName: 6e7a703c.redhat-cne.org Project hw-event-proxy-operator CRD The proxy enables applications running on bare-metal clusters to respond quickly to Redfish hardware changes and failures such as breaches of temperature thresholds, fan failure, disk loss, power outages, and memory failure, reported using the HardwareEvent CR. hardwareevents.event.redhat-cne.org : Scope: Namespaced CR: HardwareEvent Validation: Yes 6.3. Cloud Credential Operator Purpose The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run. By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. 
If no mode is specified, or the credentialsMode parameter is set to an empty string ( "" ), the CCO operates in its default mode. Project openshift-cloud-credential-operator CRDs credentialsrequests.cloudcredential.openshift.io Scope: Namespaced CR: CredentialsRequest Validation: Yes Configuration objects No configuration required. Additional resources About the Cloud Credential Operator CredentialsRequest custom resource 6.4. Cluster Authentication Operator Purpose The Cluster Authentication Operator installs and maintains the Authentication custom resource in a cluster and can be viewed with: USD oc get clusteroperator authentication -o yaml Project cluster-authentication-operator 6.5. Cluster Autoscaler Operator Purpose The Cluster Autoscaler Operator manages deployments of the OpenShift Cluster Autoscaler using the cluster-api provider. Project cluster-autoscaler-operator CRDs ClusterAutoscaler : This is a singleton resource, which controls the configuration autoscaler instance for the cluster. The Operator only responds to the ClusterAutoscaler resource named default in the managed namespace, the value of the WATCH_NAMESPACE environment variable. MachineAutoscaler : This resource targets a node group and manages the annotations to enable and configure autoscaling for that group, the min and max size. Currently only MachineSet objects can be targeted. 6.6. Cluster Cloud Controller Manager Operator Purpose Note This Operator is fully supported for Microsoft Azure Stack Hub and IBM Cloud. It is available as a Technology Preview for Alibaba Cloud, Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Red Hat OpenStack Platform (RHOSP), and VMware vSphere. The Cluster Cloud Controller Manager Operator manages and updates the cloud controller managers deployed on top of OpenShift Container Platform. The Operator is based on the Kubebuilder framework and controller-runtime libraries. It is installed via the Cluster Version Operator (CVO). It contains the following components: Operator Cloud configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-cloud-controller-manager-operator 6.7. Cluster CAPI Operator Note This Operator is available as a Technology Preview for Amazon Web Services (AWS) and Google Cloud Platform (GCP). Purpose The Cluster CAPI Operator maintains the lifecycle of Cluster API resources. This Operator is responsible for all administrative tasks related to deploying the Cluster API project within an OpenShift Container Platform cluster. Project cluster-capi-operator CRDs awsmachines.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: awsmachine Validation: No gcpmachines.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: gcpmachine Validation: No awsmachinetemplates.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: awsmachinetemplate Validation: No gcpmachinetemplates.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: gcpmachinetemplate Validation: No 6.8. Cluster Config Operator Purpose The Cluster Config Operator performs the following tasks related to config.openshift.io : Creates CRDs. Renders the initial custom resources. Handles migrations. Project cluster-config-operator 6.9. Cluster CSI Snapshot Controller Operator Note The Cluster CSI Snapshot Controller Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . 
Purpose The Cluster CSI Snapshot Controller Operator installs and maintains the CSI Snapshot Controller. The CSI Snapshot Controller is responsible for watching the VolumeSnapshot CRD objects and manages the creation and deletion lifecycle of volume snapshots. Project cluster-csi-snapshot-controller-operator Additional resources CSI snapshot controller capability 6.10. Cluster Image Registry Operator Purpose The Cluster Image Registry Operator manages a singleton instance of the OpenShift image registry. It manages all configuration of the registry, including creating storage. On initial start up, the Operator creates a default image-registry resource instance based on the configuration detected in the cluster. This indicates what cloud storage type to use based on the cloud provider. If insufficient information is available to define a complete image-registry resource, then an incomplete resource is defined and the Operator updates the resource status with information about what is missing. The Cluster Image Registry Operator runs in the openshift-image-registry namespace and it also manages the registry instance in that location. All configuration and workload resources for the registry reside in that namespace. Project cluster-image-registry-operator 6.11. Cluster Machine Approver Operator Purpose The Cluster Machine Approver Operator automatically approves the CSRs requested for a new worker node after cluster installation. Note For the control plane node, the approve-csr service on the bootstrap node automatically approves all CSRs during the cluster bootstrapping phase. Project cluster-machine-approver-operator 6.12. Cluster Monitoring Operator Purpose The Cluster Monitoring Operator (CMO) manages and updates the Prometheus-based cluster monitoring stack deployed on top of OpenShift Container Platform. Project openshift-monitoring CRDs alertmanagers.monitoring.coreos.com Scope: Namespaced CR: alertmanager Validation: Yes prometheuses.monitoring.coreos.com Scope: Namespaced CR: prometheus Validation: Yes prometheusrules.monitoring.coreos.com Scope: Namespaced CR: prometheusrule Validation: Yes servicemonitors.monitoring.coreos.com Scope: Namespaced CR: servicemonitor Validation: Yes Configuration objects USD oc -n openshift-monitoring edit cm cluster-monitoring-config 6.13. Cluster Network Operator Purpose The Cluster Network Operator installs and upgrades the networking components on an OpenShift Container Platform cluster. 6.14. Cluster Samples Operator Note The Cluster Samples Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster Samples Operator manages the sample image streams and templates stored in the openshift namespace. On initial start up, the Operator creates the default samples configuration resource to initiate the creation of the image streams and templates. The configuration object is a cluster scoped object with the key cluster and type configs.samples . The image streams are the Red Hat Enterprise Linux CoreOS (RHCOS)-based OpenShift Container Platform image streams pointing to images on registry.redhat.io . Similarly, the templates are those categorized as OpenShift Container Platform templates. The Cluster Samples Operator deployment is contained within the openshift-cluster-samples-operator namespace. 
On start up, the install pull secret is used by the image stream import logic in the OpenShift image registry and API server to authenticate with registry.redhat.io . An administrator can create any additional secrets in the openshift namespace if they change the registry used for the sample image streams. If created, those secrets contain the content of a config.json for docker needed to facilitate image import. The image for the Cluster Samples Operator contains image stream and template definitions for the associated OpenShift Container Platform release. After the Cluster Samples Operator creates a sample, it adds an annotation that denotes the OpenShift Container Platform version that it is compatible with. The Operator uses this annotation to ensure that each sample matches the compatible release version. Samples outside of its inventory are ignored, as are skipped samples. Modifications to any samples that are managed by the Operator are allowed as long as the version annotation is not modified or deleted. However, on an upgrade, as the version annotation will change, those modifications can get replaced as the sample will be updated with the newer version. The Jenkins images are part of the image payload from the installation and are tagged into the image streams directly. The samples resource includes a finalizer, which cleans up the following upon its deletion: Operator-managed image streams Operator-managed templates Operator-generated configuration resources Cluster status resources Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration. Project cluster-samples-operator Additional resources OpenShift samples capability 6.15. Cluster Storage Operator Note The Cluster Storage Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster Storage Operator sets OpenShift Container Platform cluster-wide storage defaults. It ensures a default storageclass exists for OpenShift Container Platform clusters. It also installs Container Storage Interface (CSI) drivers which enable your cluster to use various storage backends. Project cluster-storage-operator Configuration No configuration is required. Notes The storage class that the Operator creates can be made non-default by editing its annotation, but this storage class cannot be deleted as long as the Operator runs. Additional resources Storage capability 6.16. Cluster Version Operator Purpose Cluster Operators manage specific areas of cluster functionality. The Cluster Version Operator (CVO) manages the lifecycle of cluster Operators, many of which are installed in OpenShift Container Platform by default. The CVO also checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph by collecting the status of both the cluster version and its cluster Operators. This status includes the condition type, which informs you of the health and current state of the OpenShift Container Platform cluster. For more information regarding cluster version condition types, see "Understanding cluster version condition types". Project cluster-version-operator Additional resources Understanding cluster version condition types 6.17. 
Console Operator Note The Console Operator is an optional cluster capability that can be disabled by cluster administrators during installation. If you disable the Console Operator at installation, your cluster is still supported and upgradable. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Console Operator installs and maintains the OpenShift Container Platform web console on a cluster. The Console Operator is installed by default and automatically maintains a console. Project console-operator Additional resources Web console capability 6.18. Control Plane Machine Set Operator Note This Operator is available for Amazon Web Services (AWS), Microsoft Azure, and VMware vSphere. Purpose The Control Plane Machine Set Operator automates the management of control plane machine resources within an OpenShift Container Platform cluster. Project cluster-control-plane-machine-set-operator CRDs controlplanemachineset.machine.openshift.io Scope: Namespaced CR: ControlPlaneMachineSet Validation: Yes Additional resources About control plane machine sets ControlPlaneMachineSet custom resource 6.19. DNS Operator Purpose The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods that enables DNS-based Kubernetes Service discovery in OpenShift Container Platform. The Operator creates a working default deployment based on the cluster's configuration. The default cluster domain is cluster.local . Configuration of the CoreDNS Corefile or Kubernetes plugin is not yet supported. The DNS Operator manages CoreDNS as a Kubernetes daemon set exposed as a service with a static IP. CoreDNS runs on all nodes in the cluster. Project cluster-dns-operator 6.20. etcd cluster Operator Purpose The etcd cluster Operator automates etcd cluster scaling, enables etcd monitoring and metrics, and simplifies disaster recovery procedures. Project cluster-etcd-operator CRDs etcds.operator.openshift.io Scope: Cluster CR: etcd Validation: Yes Configuration objects USD oc edit etcd cluster 6.21. Ingress Operator Purpose The Ingress Operator configures and manages the OpenShift Container Platform router. Project openshift-ingress-operator CRDs clusteringresses.ingress.openshift.io Scope: Namespaced CR: clusteringresses Validation: No Configuration objects Cluster config Type Name: clusteringresses.ingress.openshift.io Instance Name: default View Command: USD oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml Notes The Ingress Operator sets up the router in the openshift-ingress project and creates the deployment for the router: USD oc get deployment -n openshift-ingress The Ingress Operator uses the clusterNetwork[].cidr from the network/cluster status to determine what mode (IPv4, IPv6, or dual stack) the managed Ingress Controller (router) should operate in. For example, if clusterNetwork contains only a v6 cidr , then the Ingress Controller operates in IPv6-only mode. In the following example, Ingress Controllers managed by the Ingress Operator will run in IPv4-only mode because only one cluster network exists and the network is an IPv4 cidr : USD oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}' Example output map[cidr:10.128.0.0/14 hostPrefix:23] 6.22. Insights Operator Note The Insights Operator is an optional cluster capability that can be disabled by cluster administrators during installation. 
For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce proactive insights recommendations about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators through Insights Advisor on console.redhat.com . Project insights-operator Configuration No configuration is required. Notes Insights Operator complements OpenShift Container Platform Telemetry. Additional resources Insights capability See About remote health monitoring for details about Insights Operator and Telemetry. 6.23. Kubernetes API Server Operator Purpose The Kubernetes API Server Operator manages and updates the Kubernetes API server deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift Container Platform library-go framework and it is installed using the Cluster Version Operator (CVO). Project openshift-kube-apiserver-operator CRDs kubeapiservers.operator.openshift.io Scope: Cluster CR: kubeapiserver Validation: Yes Configuration objects USD oc edit kubeapiserver 6.24. Kubernetes Controller Manager Operator Purpose The Kubernetes Controller Manager Operator manages and updates the Kubernetes Controller Manager deployed on top of OpenShift Container Platform. The Operator is based on OpenShift Container Platform library-go framework and it is installed via the Cluster Version Operator (CVO). It contains the following components: Operator Bootstrap manifest renderer Installer based on static pods Configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-kube-controller-manager-operator 6.25. Kubernetes Scheduler Operator Purpose The Kubernetes Scheduler Operator manages and updates the Kubernetes Scheduler deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift Container Platform library-go framework and it is installed with the Cluster Version Operator (CVO). The Kubernetes Scheduler Operator contains the following components: Operator Bootstrap manifest renderer Installer based on static pods Configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-kube-scheduler-operator Configuration The configuration for the Kubernetes Scheduler is the result of merging: a default configuration. an observed configuration from the spec schedulers.config.openshift.io . All of these are sparse configurations, invalidated JSON snippets which are merged to form a valid configuration at the end. 6.26. Kubernetes Storage Version Migrator Operator Purpose The Kubernetes Storage Version Migrator Operator detects changes of the default storage version, creates migration requests for resource types when the storage version changes, and processes migration requests. Project cluster-kube-storage-version-migrator-operator 6.27. Machine API Operator Purpose The Machine API Operator manages the lifecycle of specific purpose custom resource definitions (CRD), controllers, and RBAC objects that extend the Kubernetes API. This declares the desired state of machines in a cluster. Project machine-api-operator CRDs MachineSet Machine MachineHealthCheck 6.28. 
Machine Config Operator Purpose The Machine Config Operator manages and applies configuration and updates of the base operating system and container runtime, including everything between the kernel and kubelet. There are four components: machine-config-server : Provides Ignition configuration to new machines joining the cluster. machine-config-controller : Coordinates the upgrade of machines to the desired configurations defined by a MachineConfig object. Options are provided to control the upgrade for sets of machines individually. machine-config-daemon : Applies new machine configuration during update. Validates and verifies the state of the machine to the requested machine configuration. machine-config : Provides a complete source of machine configuration at installation, first start up, and updates for a machine. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Additional resources About the OpenShift SDN network plugin . Project openshift-machine-config-operator 6.29. Marketplace Operator Note The Marketplace Operator is an optional cluster capability that can be disabled by cluster administrators if it is not needed. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Marketplace Operator simplifies the process for bringing off-cluster Operators to your cluster by using a set of default Operator Lifecycle Manager (OLM) catalogs on the cluster. When the Marketplace Operator is installed, it creates the openshift-marketplace namespace. OLM ensures catalog sources installed in the openshift-marketplace namespace are available for all namespaces on the cluster. Project operator-marketplace Additional resources Marketplace capability 6.30. Node Tuning Operator Purpose The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. 
The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. Note Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. Project cluster-node-tuning-operator Additional resources Low latency tuning of OCP nodes 6.31. OpenShift API Server Operator Purpose The OpenShift API Server Operator installs and maintains the openshift-apiserver on a cluster. Project openshift-apiserver-operator CRDs openshiftapiservers.operator.openshift.io Scope: Cluster CR: openshiftapiserver Validation: Yes 6.32. OpenShift Controller Manager Operator Purpose The OpenShift Controller Manager Operator installs and maintains the OpenShiftControllerManager custom resource in a cluster and can be viewed with: USD oc get clusteroperator openshift-controller-manager -o yaml The custom resource definition (CRD) openshiftcontrollermanagers.operator.openshift.io can be viewed in a cluster with: USD oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml Project cluster-openshift-controller-manager-operator 6.33. Operator Lifecycle Manager Operators Purpose Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework , an open source toolkit designed to manage Operators in an effective, automated, and scalable way. Figure 6.1. Operator Lifecycle Manager workflow OLM runs by default in OpenShift Container Platform 4.12, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it. CRDs Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator. Each of these Operators is responsible for managing the custom resource definitions (CRDs) that are the basis for the OLM framework: Table 6.1. CRDs managed by OLM and Catalog Operators Resource Short name Owner Description ClusterServiceVersion (CSV) csv OLM Application metadata: name, version, icon, required resources, installation, and so on.
InstallPlan ip Catalog Calculated list of resources to be created to automatically install or upgrade a CSV. CatalogSource catsrc Catalog A repository of CSVs, CRDs, and packages that define an application. Subscription sub Catalog Used to keep CSVs up to date by tracking a channel in a package. OperatorGroup og OLM Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. Each of these Operators is also responsible for creating the following resources: Table 6.2. Resources created by OLM and Catalog Operators Resource Owner Deployments OLM ServiceAccounts (Cluster)Roles (Cluster)RoleBindings CustomResourceDefinitions (CRDs) Catalog ClusterServiceVersions OLM Operator The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster. The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application. The OLM Operator uses the following workflow: Watch for cluster service versions (CSVs) in a namespace and check that requirements are met. If requirements are met, run the install strategy for the CSV. Note A CSV must be an active member of an Operator group for the install strategy to run. Catalog Operator The Catalog Operator is responsible for resolving and installing cluster service versions (CSVs) and the required resources they specify. It is also responsible for watching catalog sources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions. To track a package in a channel, you can create a Subscription object configuring the desired package, channel, and the CatalogSource object you want to use for pulling updates. When updates are found, an appropriate InstallPlan object is written into the namespace on behalf of the user. The Catalog Operator uses the following workflow: Connect to each catalog source in the cluster. Watch for unresolved install plans created by a user, and if found: Find the CSV matching the name requested and add the CSV as a resolved resource. For each managed or required CRD, add the CRD as a resolved resource. For each required CRD, find the CSV that manages it. Watch for resolved install plans and create all of the discovered resources for it, if approved by a user or automatically. Watch for catalog sources and subscriptions and create install plans based on them. Catalog Registry The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels. A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version. Additional resources For more information, see the sections on understanding Operator Lifecycle Manager (OLM) . 6.34. OpenShift Service CA Operator Purpose The OpenShift Service CA Operator mints and manages serving certificates for Kubernetes services. 
Project openshift-service-ca-operator 6.35. vSphere Problem Detector Operator Purpose The vSphere Problem Detector Operator checks clusters that are deployed on vSphere for common installation and misconfiguration issues that are related to storage. Note The vSphere Problem Detector Operator is only started by the Cluster Storage Operator when the Cluster Storage Operator detects that the cluster is deployed on vSphere. Configuration No configuration is required. Notes The Operator supports OpenShift Container Platform installations on vSphere. The Operator uses the vsphere-cloud-credentials to communicate with vSphere. The Operator performs checks that are related to storage. Additional resources For more details, see Using the vSphere Problem Detector Operator . | [
"oc -n [namespace] edit cm hw-event-proxy-operator-manager-config",
"apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 kind: ControllerManagerConfig health: healthProbeBindAddress: :8081 metrics: bindAddress: 127.0.0.1:8080 webhook: port: 9443 leaderElection: leaderElect: true resourceName: 6e7a703c.redhat-cne.org",
"oc get clusteroperator authentication -o yaml",
"oc -n openshift-monitoring edit cm cluster-monitoring-config",
"oc edit etcd cluster",
"oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml",
"oc get deployment -n openshift-ingress",
"oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'",
"map[cidr:10.128.0.0/14 hostPrefix:23]",
"oc edit kubeapiserver",
"oc get clusteroperator openshift-controller-manager -o yaml",
"oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operators/cluster-operators-ref |
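As an editorial aside to the OLM section above: a minimal shell sketch for inspecting the four OLM resource types that the chapter describes. The namespace names openshift-operators and openshift-marketplace are the usual defaults and are assumptions here; adjust them for your cluster.
# List the OLM resources described above; the namespaces are common defaults and may differ.
oc get clusterserviceversions -n openshift-operators    # CSVs: application metadata
oc get subscriptions -n openshift-operators             # Subscriptions: track a channel in a package
oc get installplans -n openshift-operators              # InstallPlans: resources to create for an install or upgrade
oc get catalogsources -n openshift-marketplace          # CatalogSources: repositories of CSVs, CRDs, and packages
The short names listed in Table 6.1 (csv, sub, ip, catsrc) can be used in place of the full resource names.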
Chapter 23. Additional introspection operations | Chapter 23. Additional introspection operations In some situations, you might want to perform introspection outside of the standard overcloud deployment workflow. For example, you might want to introspect new nodes or refresh introspection data after replacing hardware on existing unused nodes. 23.1. Performing individual node introspection To perform a single introspection on an available node, set the node to management mode and perform the introspection. Procedure Set all nodes to a manageable state: Perform the introspection: After the introspection completes, the node changes to an available state. 23.2. Performing node introspection after initial introspection After an initial introspection, all nodes enter an available state due to the --provide option. To perform introspection on all nodes after the initial introspection, set the node to management mode and perform the introspection. Procedure Set all nodes to a manageable state Run the bulk introspection command: After the introspection completes, all nodes change to an available state. 23.3. Performing network introspection for interface information Network introspection retrieves link layer discovery protocol (LLDP) data from network switches. The following commands show a subset of LLDP information for all interfaces on a node, or full information for a particular node and interface. This can be useful for troubleshooting. Director enables LLDP data collection by default. Procedure To get a list of interfaces on a node, run the following command: For example: To view interface data and switch port information, run the following command: For example: 23.4. Retrieving hardware introspection details The Bare Metal service hardware-inspection-extras feature is enabled by default, and you can use it to retrieve hardware details for overcloud configuration. For more information about the inspection_extras parameter in the undercloud.conf file, see Director configuration parameters . For example, the numa_topology collector is part of the hardware-inspection extras and includes the following information for each NUMA node: RAM (in kilobytes) Physical CPU cores and their sibling threads NICs associated with the NUMA node Procedure To retrieve the information listed above, substitute <UUID> with the UUID of the bare-metal node to complete the following command: The following example shows the retrieved NUMA information for a bare-metal node: | [
"(undercloud) USD openstack baremetal node manage [NODE UUID]",
"(undercloud) USD openstack overcloud node introspect [NODE UUID] --provide",
"(undercloud) USD for node in USD(openstack baremetal node list --fields uuid -f value) ; do openstack baremetal node manage USDnode ; done",
"(undercloud) USD openstack overcloud node introspect --all-manageable --provide",
"(undercloud) USD openstack baremetal introspection interface list [NODE UUID]",
"(undercloud) USD openstack baremetal introspection interface list c89397b7-a326-41a0-907d-79f8b86c7cd9 +-----------+-------------------+------------------------+-------------------+----------------+ | Interface | MAC Address | Switch Port VLAN IDs | Switch Chassis ID | Switch Port ID | +-----------+-------------------+------------------------+-------------------+----------------+ | p2p2 | 00:0a:f7:79:93:19 | [103, 102, 18, 20, 42] | 64:64:9b:31:12:00 | 510 | | p2p1 | 00:0a:f7:79:93:18 | [101] | 64:64:9b:31:12:00 | 507 | | em1 | c8:1f:66:c7:e8:2f | [162] | 08:81:f4:a6:b3:80 | 515 | | em2 | c8:1f:66:c7:e8:30 | [182, 183] | 08:81:f4:a6:b3:80 | 559 | +-----------+-------------------+------------------------+-------------------+----------------+",
"(undercloud) USD openstack baremetal introspection interface show [NODE UUID] [INTERFACE]",
"(undercloud) USD openstack baremetal introspection interface show c89397b7-a326-41a0-907d-79f8b86c7cd9 p2p1 +--------------------------------------+------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +--------------------------------------+------------------------------------------------------------------------------------------------------------------------+ | interface | p2p1 | | mac | 00:0a:f7:79:93:18 | | node_ident | c89397b7-a326-41a0-907d-79f8b86c7cd9 | | switch_capabilities_enabled | [u'Bridge', u'Router'] | | switch_capabilities_support | [u'Bridge', u'Router'] | | switch_chassis_id | 64:64:9b:31:12:00 | | switch_port_autonegotiation_enabled | True | | switch_port_autonegotiation_support | True | | switch_port_description | ge-0/0/2.0 | | switch_port_id | 507 | | switch_port_link_aggregation_enabled | False | | switch_port_link_aggregation_id | 0 | | switch_port_link_aggregation_support | True | | switch_port_management_vlan_id | None | | switch_port_mau_type | Unknown | | switch_port_mtu | 1514 | | switch_port_physical_capabilities | [u'1000BASE-T fdx', u'100BASE-TX fdx', u'100BASE-TX hdx', u'10BASE-T fdx', u'10BASE-T hdx', u'Asym and Sym PAUSE fdx'] | | switch_port_protocol_vlan_enabled | None | | switch_port_protocol_vlan_ids | None | | switch_port_protocol_vlan_support | None | | switch_port_untagged_vlan_id | 101 | | switch_port_vlan_ids | [101] | | switch_port_vlans | [{u'name': u'RHOS13-PXE', u'id': 101}] | | switch_protocol_identities | None | | switch_system_name | rhos-compute-node-sw1 | +--------------------------------------+------------------------------------------------------------------------------------------------------------------------+",
"openstack baremetal introspection data save <UUID> | jq .numa_topology",
"{ \"cpus\": [ { \"cpu\": 1, \"thread_siblings\": [ 1, 17 ], \"numa_node\": 0 }, { \"cpu\": 2, \"thread_siblings\": [ 10, 26 ], \"numa_node\": 1 }, { \"cpu\": 0, \"thread_siblings\": [ 0, 16 ], \"numa_node\": 0 }, { \"cpu\": 5, \"thread_siblings\": [ 13, 29 ], \"numa_node\": 1 }, { \"cpu\": 7, \"thread_siblings\": [ 15, 31 ], \"numa_node\": 1 }, { \"cpu\": 7, \"thread_siblings\": [ 7, 23 ], \"numa_node\": 0 }, { \"cpu\": 1, \"thread_siblings\": [ 9, 25 ], \"numa_node\": 1 }, { \"cpu\": 6, \"thread_siblings\": [ 6, 22 ], \"numa_node\": 0 }, { \"cpu\": 3, \"thread_siblings\": [ 11, 27 ], \"numa_node\": 1 }, { \"cpu\": 5, \"thread_siblings\": [ 5, 21 ], \"numa_node\": 0 }, { \"cpu\": 4, \"thread_siblings\": [ 12, 28 ], \"numa_node\": 1 }, { \"cpu\": 4, \"thread_siblings\": [ 4, 20 ], \"numa_node\": 0 }, { \"cpu\": 0, \"thread_siblings\": [ 8, 24 ], \"numa_node\": 1 }, { \"cpu\": 6, \"thread_siblings\": [ 14, 30 ], \"numa_node\": 1 }, { \"cpu\": 3, \"thread_siblings\": [ 3, 19 ], \"numa_node\": 0 }, { \"cpu\": 2, \"thread_siblings\": [ 2, 18 ], \"numa_node\": 0 } ], \"ram\": [ { \"size_kb\": 66980172, \"numa_node\": 0 }, { \"size_kb\": 67108864, \"numa_node\": 1 } ], \"nics\": [ { \"name\": \"ens3f1\", \"numa_node\": 1 }, { \"name\": \"ens3f0\", \"numa_node\": 1 }, { \"name\": \"ens2f0\", \"numa_node\": 0 }, { \"name\": \"ens2f1\", \"numa_node\": 0 }, { \"name\": \"ens1f1\", \"numa_node\": 0 }, { \"name\": \"ens1f0\", \"numa_node\": 0 }, { \"name\": \"eno4\", \"numa_node\": 0 }, { \"name\": \"eno1\", \"numa_node\": 0 }, { \"name\": \"eno3\", \"numa_node\": 0 }, { \"name\": \"eno2\", \"numa_node\": 0 } ] }"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/assembly_additional-introspection-operations |
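A small hedged addition to the introspection chapter above: the saved introspection data can be filtered further with jq, and the per-node loop mirrors the bulk commands shown earlier. <UUID> is a placeholder, and jq is assumed to be installed on the undercloud.
# Extract only the RAM and NIC portions of the stored introspection data.
openstack baremetal introspection data save <UUID> | jq '.numa_topology.ram'
openstack baremetal introspection data save <UUID> | jq -r '.numa_topology.nics[].name'
# List LLDP interface data for every node known to the Bare Metal service.
for node in $(openstack baremetal node list --fields uuid -f value); do
  openstack baremetal introspection interface list "$node"
done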
Chapter 7. Configuring the Ingress Controller endpoint publishing strategy | Chapter 7. Configuring the Ingress Controller endpoint publishing strategy 7.1. Ingress Controller endpoint publishing strategy NodePortService endpoint publishing strategy The NodePortService endpoint publishing strategy publishes the Ingress Controller using a Kubernetes NodePort service. In this configuration, the Ingress Controller deployment uses container networking. A NodePortService is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift Container Platform; however, to support static port allocations, your changes to the node port field of the managed NodePortService are preserved. Figure 7.1. Diagram of NodePortService The preceding graphic shows the following concepts pertaining to OpenShift Container Platform Ingress NodePort endpoint publishing strategy: All the available nodes in the cluster have their own, externally accessible IP addresses. The service running in the cluster is bound to the unique NodePort for all the nodes. When the client connects to a node that is down, for example, by connecting the 10.0.128.4 IP address in the graphic, the node port directly connects the client to an available node that is running the service. In this scenario, no load balancing is required. As the image shows, the 10.0.128.4 address is down and another IP address must be used instead. Note The Ingress Operator ignores any updates to .spec.ports[].nodePort fields of the service. By default, ports are allocated automatically and you can access the port allocations for integrations. However, sometimes static port allocations are necessary to integrate with existing infrastructure which may not be easily reconfigured in response to dynamic ports. To achieve integrations with static node ports, you can update the managed service resource directly. For more information, see the Kubernetes Services documentation on NodePort . HostNetwork endpoint publishing strategy The HostNetwork endpoint publishing strategy publishes the Ingress Controller on node ports where the Ingress Controller is deployed. An Ingress Controller with the HostNetwork endpoint publishing strategy can have only one pod replica per node. If you want n replicas, you must use at least n nodes where those replicas can be scheduled. Because each pod replica requests ports 80 and 443 on the node host where it is scheduled, a replica cannot be scheduled to a node if another pod on the same node is using those ports. 7.1.1. Configuring the Ingress Controller endpoint publishing scope to Internal When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External . Cluster administrators can change an External scoped Ingress Controller to Internal . Prerequisites You installed the oc CLI. Procedure To change an External scoped Ingress Controller to Internal , enter the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"Internal"}}}}' To check the status of the Ingress Controller, enter the following command: USD oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml The Progressing status condition indicates whether you must take further action. 
For example, the status condition can indicate that you need to delete the service by entering the following command: USD oc -n openshift-ingress delete services/router-default If you delete the service, the Ingress Operator recreates it as Internal . 7.1.2. Configuring the Ingress Controller endpoint publishing scope to External When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External . The Ingress Controller's scope can be configured to be Internal during installation or after, and cluster administrators can change an Internal Ingress Controller to External . Important On some platforms, it is necessary to delete and recreate the service. Changing the scope can cause disruption to Ingress traffic, potentially for several minutes. This applies to platforms where it is necessary to delete and recreate the service, because the procedure can cause OpenShift Container Platform to deprovision the existing service load balancer, provision a new one, and update DNS. Prerequisites You installed the oc CLI. Procedure To change an Internal scoped Ingress Controller to External , enter the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/private --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"External"}}}}' To check the status of the Ingress Controller, enter the following command: USD oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml The Progressing status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command: USD oc -n openshift-ingress delete services/router-default If you delete the service, the Ingress Operator recreates it as External . 7.2. Additional resources For more information, see Ingress Controller configuration parameters . | [
"oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"Internal\"}}}}'",
"oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml",
"oc -n openshift-ingress delete services/router-default",
"oc -n openshift-ingress-operator patch ingresscontrollers/private --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"External\"}}}}'",
"oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml",
"oc -n openshift-ingress delete services/router-default"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/networking/nw-ingress-controller-endpoint-publishing-strategies |
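As a hedged follow-up to the scope-change procedures above, the current endpoint publishing strategy and the Progressing condition can be read directly with jsonpath; the field paths below assume the status layout referenced in this chapter.
# Show only the endpoint publishing strategy of the default Ingress Controller.
oc -n openshift-ingress-operator get ingresscontrollers/default \
  -o jsonpath='{.status.endpointPublishingStrategy}{"\n"}'
# Show the Progressing condition message, which indicates whether further action
# (such as deleting services/router-default) is required.
oc -n openshift-ingress-operator get ingresscontrollers/default \
  -o jsonpath='{.status.conditions[?(@.type=="Progressing")].message}{"\n"}'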
Installing and deploying Service Registry on OpenShift | Installing and deploying Service Registry on OpenShift Red Hat Integration 2023.q4 Install, deploy, and configure Service Registry 2.5 Red Hat Integration Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/installing_and_deploying_service_registry_on_openshift/index |
Chapter 17. Creating and using deployment collections | Chapter 17. Creating and using deployment collections You can use collections in RHACS to define and name a group of resources by using matching patterns. You can then configure system processes to use these collections. Currently, collections are available only under the following conditions: Collections are available only for deployments. You can only use collections with vulnerability reporting. See "Vulnerability reporting" in the Additional resources section for more information. Deployment collections are only available to RHACS customers if they are using the PostgreSQL database. Note By default, RHACS Cloud Service uses the PostgreSQL database, and it is also used by default when installing RHACS release 4.0 and later. RHACS customers using an earlier release than 3.74 can migrate to the PostgreSQL database with help from Red Hat. 17.1. Prerequisites A user account must have the following permissions to use the Collections feature: WorkflowAdministration : You must have Read access to view collections and Write access to add, change, or delete collections. Deployment : You need Read Access or Read and Write Access to understand how configured rules will match with deployments. These permissions are included in the Admin system role. For more information about roles and permissions, see "Managing RBAC in RHACS" in "Additional resources". 17.2. Understanding deployment collections Deployment collections are only available to RHACS customers using the PostgreSQL database. By default, RHACS Cloud Service uses the PostgreSQL database, and it is also used by default when installing RHACS release 4.0 and later. RHACS customers using an earlier release than 3.74 can migrate to the PostgreSQL database with help from Red Hat. An RHACS collection is a user-defined, named reference. It defines a logical grouping by using selection rules. Those rules can match a deployment, namespace, or cluster name or label. You can specify rules by using exact matches or regular expressions. Collections are resolved at run time and can refer to objects that do not exist at the time of the collection definition. Collections can be constructed by using other collections to describe complex hierarchies. Collections provide you with a language to describe how your dynamic infrastructure is organized, eliminating the need for cloning and repetitive editing of RHACS properties such as inclusion and exclusion scopes. You can use collections to identify any group of deployments in your system, such as: An infrastructure area that is owned by a specific development team An application that requires different policy exceptions when running in a development or in a production cluster A distributed application that spans multiple namespaces, defined with a common deployment label An entire production or test environment Collections can be created and managed by using the RHACS portal. The collection editor helps you apply selection rules at the deployment, namespace, and cluster level. You can use simple and complex rules, including regular expression. You can define a collection by selecting one or more deployments, namespaces, or clusters, as shown in the following image. This image shows a collection that contains deployments with the name reporting or that contain db in the name. The collection includes deployments matching those names in the namespace with a specific label of kubernetes.io/metadata.name=medical , and in clusters named production . 
The collection editor also helps you to describe complex hierarchies by attaching, or nesting, other collections. The editor provides a real-time preview side panel that helps you understand the rules you are applying by showing the resulting matches to the rules that you have configured. The following image provides an example of results from a collection named "Sensitive User Data" with a set of collection rules (not shown). The "Sensitive User Data" collection has two attached collections, "Credit card processors" and "Medical records" and each of those collections have their own collection rules. The results shown in the side panel include items that match the rules configured for all three collections. 17.3. Accessing deployment collections To use collections, click Platform Configuration Collections . The page displays a list of currently-configured collections. You can perform the following actions: Search for collections by entering text in the Search by name field, and then press -> . Click on a collection in the collection list to view the collection in read-only mode. Click on for an existing collection to edit, clone, or delete it. Note You cannot delete a collection that is actively used in RHACS. Click Create collection to create a new deployment collection. 17.4. Creating deployment collections When creating a collection, you must name it and define the rules for the collection. Procedure In the Collections page, click Create collection . Enter the name and description for the collection. In the Collection rules section, you must perform at least one of the following actions: Define the rules for the collection: See the "Creating collection rules" section for more information. Attach existing collections to the collection: See the "Adding attached collections" section for more information. The results of your rule configuration or choosing attached collections are available in the Collection results live preview panel. Click Hide results to remove this panel from display. Click Save . 17.4.1. Creating collection rules When creating collections, you must configure at least one rule or attach another collection to the new collection that you are creating. Note Currently, collections are available only for deployments. Configure rules to select the resources to include in the collection. Use the preview panel to see the results of the collection rules as you configure them. You can configure rules in any order. Procedure In the Deployments section, select one of the following options from the drop-down list: All deployments : Includes all deployments in the collection. If you select this option, you must filter the collection by using namespaces or clusters or by attaching another collection. Deployments with names matching Click this option to select by name and then click one of the following options: Select An exact value of and enter the exact name of the deployment. Select A regex value of to use regular expression to search for a deployment. This option is useful if you do not know the exact name of the deployment. A regular expression is a string of letters, numbers, and symbols that defines a pattern. RHACS uses this pattern to match characters or groups of characters and return results. For more information about regular expression, see "Regular-Expressions.info" in the "Additional resources" section. Deployments with labels matching exactly : Click this option to select deployments with labels that match the exact text that you enter. 
The label must be a valid Kubernetes label in the format of key=value . Optional: To add more deployments with names or labels that match additional criteria for inclusion, click OR and configure another exact or regular expression value. The following example provides the steps for configuring a collection for a medical application. In this example, you want your collection to include the reporting deployment, a database called patient-db , and you want to select namespaces with labels where key = kubernetes.io/metadata.name and value = medical . For this example, perform the following steps: In Collection rules , select Deployments with names matching . Click An exact value of and enter reporting . Click OR . Click A regex value of and enter .*-db to select all deployments with a name ending in db in your environment. The regex value option uses regular expression for pattern matching; for more information about regular expression, see "Regular-Expressions.info" in the Additional resources section. The panel on the right might display databases that you do not want to include. You can exclude those databases by using additional filters. For example: Filter by namespace labels by clicking Namespaces with labels matching exactly and entering kubernetes.io/metadata.name=medical to include only deployments in the namespace that is labeled medical . If you know the name of the namespace, click Namespaces with names matching and enter the name. 17.4.2. Adding attached collections Grouping collections and adding them to other collections can be useful if you want to create small collections based on deployments. You can reuse and combine those smaller collections into larger, hierarchical collections. To add additional collections to a collection that you are creating: Perform one of the following actions: Enter text in the Filter by name field and press -> to view matching results. Click the name of a collection from the Available collections list to view information about the collection, such as the name and rules for the collection and the deployments that match that collection. After viewing collection information, close the window to return to the Attached collections page. Click +Attach . The Attached collections section lists the collections that you attached. Note When you add an attached collection, the attached collection contains results based on the configured selection rules. For example, if an attached collection includes resources that would be filtered out by the rules used in the parent collection, then those items are still added to the parent collection because of the rules in the attached collection. Attached collections extend the original collection using an OR operator. Click Save . 17.5. Migration of access scopes to collections Database changes in RHACS from rocksdb to PostgreSQL are provided as a Technology Preview beginning with release 3.74 and are generally available in release 4.0. When the database is migrated from rocksdb to PostgreSQL, existing access scopes used in vulnerability reporting are migrated to collections. You can verify that the migration resulted in the correct configuration for your existing reports by navigating to Vulnerability Management Reporting and viewing the report information. The migration process creates collection objects for access scopes that were used in report configurations. RHACS generates two or more collections for a single access scope, depending on the complexity of the access scope. 
The generated collections for a given access scope include the following types: Embedded collections: To mimic the exact selection logic of the original access scope, RHACS generates one or more collections where matched deployments result in the same selection of clusters and namespaces as the original access scope. The collection name is in the format of System-generated embedded collection number for the scope where number is a number starting from 0. Note These embedded collections will not have any attached collections. They have cluster and namespace selection rules, but no deployment rules because the original access scopes did not filter on deployments. Root collection for the access scope: This collection is added to the report configurations. The collection name is in the format of System-generated root collection for the scope . This collection does not define any rules, but attaches one or more embedded collections. The combination of these embedded collections results in the same selection of clusters and namespaces as the original access scope. For access scopes that define cluster or namespace label selectors, RHACS can only migrate those scopes that have the 'IN' operator between the key and values. Access scopes with label selectors that were created by using the RHACS portal used the 'IN' operator by default. Migration of scopes that used the 'NOT_IN', 'EXISTS' and 'NOT_EXISTS' operators is not supported. If a collection cannot be created for an access scope, log messages are created during the migration. Log messages have the following format: You can also click the report in Vulnerability Management Reporting to view the report information page. This page contains a message if a report needs a collection attached to it. Note The original access scopes are not removed during the migration. If you created an access scope only for use in filtering vulnerability management reports, you can manually remove the access scope. 17.6. Managing collections by using the API You can configure collections by using the CollectionService API object. For example, you can use CollectionService_DryRunCollection to return a list of results equivalent to the live preview panel in the RHACS portal. For more information, go to Help API reference in the RHACS portal. Additional resources Managing RBAC in RHACS Vulnerability reporting Using regular expression: Regular-Expressions.info | [
"Failed to create collections for scope _scope-name_: Unsupported operator NOT_IN in scope's label selectors. Only operator 'IN' is supported. The scope is attached to the following report configurations: [list of report configs]; Please manually create an equivalent collection and edit the listed report configurations to use this collection. Note that reports will not function correctly until a collection is attached."
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/operating/create-use-collections |
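To make the API mention above concrete, a heavily hedged sketch of listing collections over the RHACS API with curl; the endpoint host, the API token, the /v1/collections path, and the response field names are assumptions for illustration and should be confirmed against Help -> API reference in your own RHACS portal.
# ROX_ENDPOINT and ROX_API_TOKEN are placeholders; the path and response shape are assumptions.
export ROX_ENDPOINT="central.example.com:443"
export ROX_API_TOKEN="<api-token>"
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_ENDPOINT}/v1/collections" | jq -r '.collections[].name'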
4.8.3. Deleting a Failover Domain | 4.8.3. Deleting a Failover Domain To delete a failover domain, follow the steps in this section. From the cluster-specific page, you can configure Failover Domains for that cluster by clicking on Failover Domains along the top of the cluster display. This displays the failover domains that have been configured for this cluster. Select the check box for the failover domain to delete. Click Delete . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-config-delete-failoverdm-conga-ca |
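For clusters administered from the command line rather than Conga, a hedged equivalent of the deletion procedure above uses the ccs utility; the host name and failover domain name below are placeholders.
# Remove a failover domain with ccs instead of the Conga web UI (placeholder names).
ccs -h node01.example.com --rmfailoverdomain exampledomain
# Propagate and activate the updated cluster configuration on all nodes.
ccs -h node01.example.com --sync --activate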
Chapter 1. High Availability Add-On Overview | Chapter 1. High Availability Add-On Overview The High Availability Add-On is a clustered system that provides reliability, scalability, and availability to critical production services. The following sections provide a high-level description of the components and functions of the High Availability Add-On: Section 1.1, "Cluster Basics" Section 1.2, "High Availability Add-On Introduction" Section 1.3, "Cluster Infrastructure" 1.1. Cluster Basics A cluster is two or more computers (called nodes or members ) that work together to perform a task. There are four major types of clusters: Storage High availability Load balancing High performance Storage clusters provide a consistent file system image across servers in a cluster, allowing the servers to simultaneously read and write to a single shared file system. A storage cluster simplifies storage administration by limiting the installation and patching of applications to one file system. Also, with a cluster-wide file system, a storage cluster eliminates the need for redundant copies of application data and simplifies backup and disaster recovery. The High Availability Add-On provides storage clustering in conjunction with Red Hat GFS2 (part of the Resilient Storage Add-On). High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Typically, services in a high availability cluster read and write data (by means of read-write mounted file systems). Therefore, a high availability cluster must maintain data integrity as one cluster node takes over control of a service from another cluster node. Node failures in a high availability cluster are not visible from clients outside the cluster. (High availability clusters are sometimes referred to as failover clusters.) The High Availability Add-On provides high availability clustering through its High Availability Service Management component, rgmanager . Load-balancing clusters dispatch network service requests to multiple cluster nodes to balance the request load among the cluster nodes. Load balancing provides cost-effective scalability because you can match the number of nodes according to load requirements. If a node in a load-balancing cluster becomes inoperative, the load-balancing software detects the failure and redirects requests to other cluster nodes. Node failures in a load-balancing cluster are not visible from clients outside the cluster. Load balancing is available with the Load Balancer Add-On. High-performance clusters use cluster nodes to perform concurrent calculations. A high-performance cluster allows applications to work in parallel, therefore enhancing the performance of the applications. (High performance clusters are also referred to as computational clusters or grid computing.) Note The cluster types summarized in the preceding text reflect basic configurations; your needs might require a combination of the clusters described. Additionally, the Red Hat Enterprise Linux High Availability Add-On contains support for configuring and managing high availability servers only . It does not support high-performance clusters. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/high_availability_add-on_overview/ch.gfscs.cluster-overview-cso |
1.3. Related Documentation | 1.3. Related Documentation For more information about using Red Hat Enterprise Linux, see the following resources: Installation Guide - Documents relevant information regarding the installation of Red Hat Enterprise Linux 6. Deployment Guide - Documents relevant information regarding the deployment, configuration and administration of Red Hat Enterprise Linux 6. Storage Administration Guide - Provides instructions on how to effectively manage storage devices and file systems on Red Hat Enterprise Linux 6. For more information about the High Availability Add-On and the Resilient Storage Add-On for Red Hat Enterprise Linux 6, see the following resources: High Availability Add-On Overview - Provides a high-level overview of the Red Hat High Availability Add-On. Cluster Administration - Provides information about installing, configuring and managing the Red Hat High Availability Add-On, Global File System 2: Configuration and Administration - Provides information about installing, configuring, and maintaining Red Hat GFS2 (Red Hat Global File System 2), which is included in the Resilient Storage Add-On. DM Multipath - Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux 6. Load Balancer Administration - Provides information on configuring high-performance systems and services with the Load Balancer Add-On, a set of integrated software components that provide Linux Virtual Servers (LVS) for balancing IP load across a set of real servers. Release Notes - Provides information about the current release of Red Hat products. Red Hat documents are available in HTML, PDF, and RPM versions on the Red Hat Enterprise Linux Documentation CD and online at https://access.redhat.com/site/documentation/ . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/related_documentation-clvm |
2.8. ss | 2.8. ss ss is a command-line utility that prints statistical information about sockets, allowing administrators to assess device performance over time. By default, ss lists open non-listening TCP sockets that have established connections, but a number of useful options are provided to help administrators filter out statistics about specific sockets. Red Hat recommends using ss over netstat in Red Hat Enterprise Linux 7. One common usage is ss -tmpie which displays detailed information (including internal information) about TCP sockets, memory usage, and processes using the socket. ss is provided by the iproute package. For more information, see the man page: | [
"man ss"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-ss |
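A few further hedged ss invocations that build on the ss -tmpie usage above; the filter syntax follows the ss man page.
# Print summary statistics for all socket types.
ss -s
# Listening TCP sockets with owning processes, numeric addresses only.
ss -tlnp
# Established TCP connections whose destination port is 443.
ss -t state established '( dport = :443 )'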
Chapter 7. Native mode | Chapter 7. Native mode For additional information about compiling and testing applications in native mode, see Producing a native executable in the Compiling your Quarkus applications to native executables guide. 7.1. Character encodings By default, not all Charsets are available in native mode. Charset.defaultCharset(), US-ASCII, ISO-8859-1, UTF-8, UTF-16BE, UTF-16LE, UTF-16 If you expect your application to need any encoding not included in this set or if you see an UnsupportedCharsetException thrown in the native mode, please add the following entry to your application.properties : quarkus.native.add-all-charsets = true See also quarkus.native.add-all-charsets in Quarkus documentation. 7.2. Locale By default, only the building JVM default locale is included in the native image. Quarkus provides a way to set the locale via application.properties , so that you do not need to rely on LANG and LC_* environment variables: quarkus.native.user-country=US quarkus.native.user-language=en There is also support for embedding multiple locales into the native image and for selecting the default locale via Mandrel command line options -H:IncludeLocales=fr,en , -H:+IncludeAllLocales and -H:DefaultLocale=de . You can set those via the Quarkus quarkus.native.additional-build-args property. 7.3. Embedding resources in the native executable Resources accessed via Class.getResource() , Class.getResourceAsStream() , ClassLoader.getResource() , ClassLoader.getResourceAsStream() , etc. at runtime need to be explicitly listed for inclusion in the native executable. This can be done using the Quarkus quarkus.native.resources.includes and quarkus.native.resources.excludes properties in the application.properties file as demonstrated below: quarkus.native.resources.includes = docs/*,images/* quarkus.native.resources.excludes = docs/ignored.adoc,images/ignored.png In the example above, resources named docs/included.adoc and images/included.png would be embedded in the native executable while docs/ignored.adoc and images/ignored.png would not. resources.includes and resources.excludes are both lists of comma-separated Ant-path style glob patterns. Refer to Red Hat build of Apache Camel for Quarkus Extensions Reference for more details. 7.4. Using the onException clause in native mode When using Camel onException handling in native mode, it is your responsibility to register the exception classes for reflection. For instance, having a camel context with onException handling: onException(MyException.class).handled(true); from("direct:route-that-could-produce-my-exception").throwException(MyException.class, "Simulated error"); The class mypackage.MyException should be registered for reflection. For more information, see Registering classes for reflection . 7.5. Registering classes for reflection By default, dynamic reflection is not available in native mode. Classes for which reflective access is needed have to be registered for reflection at compile time. In many cases, application developers do not need to care because Quarkus extensions are able to detect the classes that require the reflection and register them automatically. However, in some situations, Quarkus extensions may miss some classes and it is up to the application developer to register them. There are two ways to do that: The @io.quarkus.runtime.annotations.RegisterForReflection annotation can be used to register classes on which it is used, or it can also register third party classes via its targets attribute.
import io.quarkus.runtime.annotations.RegisterForReflection; @RegisterForReflection class MyClassAccessedReflectively { } @RegisterForReflection( targets = { org.thirdparty.Class1.class, org.thirdparty.Class2.class } ) class ReflectionRegistrations { } The quarkus.camel.native.reflection options in application.properties : quarkus.camel.native.reflection.include-patterns = org.apache.commons.lang3.tuple.* quarkus.camel.native.reflection.exclude-patterns = org.apache.commons.lang3.tuple.*Triple For these options to work properly, the artifacts containing the selected classes must either contain a Jandex index ('META-INF/jandex.idx') or they must be registered for indexing using the 'quarkus.index-dependency.*' options in 'application.properties' - for example: quarkus.index-dependency.commons-lang3.group-id = org.apache.commons quarkus.index-dependency.commons-lang3.artifact-id = commons-lang3 7.6. Registering classes for serialization If serialization support is requested via quarkus.camel.native.reflection.serialization-enabled , the classes listed in CamelSerializationProcessor.BASE_SERIALIZATION_CLASSES are automatically registered for serialization. You can register more classes using @RegisterForReflection(serialization = true) .
"Charset.defaultCharset(), US-ASCII, ISO-8859-1, UTF-8, UTF-16BE, UTF-16LE, UTF-16",
"quarkus.native.add-all-charsets = true",
"quarkus.native.user-country=US quarkus.native.user-language=en",
"quarkus.native.resources.includes = docs/*,images/* quarkus.native.resources.excludes = docs/ignored.adoc,images/ignored.png",
"onException(MyException.class).handled(true); from(\"direct:route-that-could-produce-my-exception\").throw(MyException.class);",
"import io.quarkus.runtime.annotations.RegisterForReflection; @RegisterForReflection class MyClassAccessedReflectively { } @RegisterForReflection( targets = { org.third-party.Class1.class, org.third-party.Class2.class } ) class ReflectionRegistrations { }",
"quarkus.camel.native.reflection.include-patterns = org.apache.commons.lang3.tuple.* quarkus.camel.native.reflection.exclude-patterns = org.apache.commons.lang3.tuple.*Triple",
"quarkus.index-dependency.commons-lang3.group-id = org.apache.commons quarkus.index-dependency.commons-lang3.artifact-id = commons-lang3"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/camel-quarkus-native-mode |
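As a hedged build sketch tying the chapter above together: the quarkus.native.* entries are normally kept in src/main/resources/application.properties, and the native executable is then produced with the Quarkus Maven build. The -Dnative flag, the container-build property, and the *-runner binary name are standard Quarkus conventions assumed here rather than taken from this chapter.
# Build the native executable; container-build avoids a local Mandrel/GraalVM installation.
./mvnw package -Dnative -Dquarkus.native.container-build=true
# Run the resulting binary (the *-runner suffix is the Quarkus default).
./target/*-runner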
Preface | Preface The Red Hat build of Cryostat is a container-native implementation of JDK Flight Recorder (JFR) that you can use to securely monitor the Java Virtual Machine (JVM) performance in workloads that run on an OpenShift Container Platform cluster. You can use Cryostat 3.0 to start, stop, retrieve, archive, import, and export JFR data for JVMs inside your containerized applications by using a web console or an HTTP API. Depending on your use case, you can store and analyze your recordings directly on your Red Hat OpenShift cluster by using the built-in tools that Cryostat provides or you can export recordings to an external monitoring application to perform a more in-depth analysis of your recorded data. Important Red Hat build of Cryostat is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/configuring_sidecar_containers_on_cryostat/preface-cryostat |
Providing feedback on Red Hat JBoss Web Server documentation | Providing feedback on Red Hat JBoss Web Server documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/installation_guide/providing-direct-documentation-feedback_jboss_web_server_installation_guide |
5.9. Red Hat Enterprise Linux-Specific Information | 5.9. Red Hat Enterprise Linux-Specific Information Depending on your past system administration experience, managing storage under Red Hat Enterprise Linux is either mostly familiar or completely foreign. This section discusses aspects of storage administration specific to Red Hat Enterprise Linux. 5.9.1. Device Naming Conventions As with all Linux-like operating systems, Red Hat Enterprise Linux uses device files to access all hardware (including disk drives). However, the naming conventions for attached storage devices vary somewhat between various Linux and Linux-like implementations. Here is how these device files are named under Red Hat Enterprise Linux. Note Device names under Red Hat Enterprise Linux are determined at boot-time. Therefore, changes made to a system's hardware configuration can result in device names changing when the system reboots. Because of this, problems can result if any device name references in system configuration files are not updated appropriately. 5.9.1.1. Device Files Under Red Hat Enterprise Linux, the device files for disk drives appear in the /dev/ directory. The format for each file name depends on several aspects of the actual hardware and how it has been configured. The important points are as follows: Device type Unit Partition 5.9.1.1.1. Device Type The first two letters of the device file name refer to the specific type of device. For disk drives, there are two device types that are most common: sd -- The device is SCSI-based hd -- The device is ATA-based More information about ATA and SCSI can be found in Section 5.3.2, "Present-Day Industry-Standard Interfaces" . 5.9.1.1.2. Unit Following the two-letter device type are one or two letters denoting the specific unit. The unit designator starts with "a" for the first unit, "b" for the second, and so on. Therefore, the first hard drive on your system may appear as hda or sda . Note SCSI's ability to address large numbers of devices necessitated the addition of a second unit character to support systems with more than 26 SCSI devices attached. Therefore, the first 26 SCSI hard drives on a system would be named sda through sdz , the next 26 would be named sdaa through sdaz , and so on. 5.9.1.1.3. Partition The final part of the device file name is a number representing a specific partition on the device, starting with "1." The number may be one or two digits in length, depending on the number of partitions written to the specific device. Once the format for device file names is known, it is easy to understand what each refers to. Here are some examples: /dev/hda1 -- The first partition on the first ATA drive /dev/sdb12 -- The twelfth partition on the second SCSI drive /dev/sdad4 -- The fourth partition on the thirtieth SCSI drive 5.9.1.1.4. Whole-Device Access There are instances where it is necessary to access the entire device and not just a specific partition. This is normally done when the device is not partitioned or does not support standard partitions (such as a CD-ROM drive). In these cases, the partition number is omitted: /dev/hdc -- The entire third ATA device /dev/sdb -- The entire second SCSI device However, most disk drives use partitions (more information on partitioning under Red Hat Enterprise Linux can be found in Section 5.9.6.1, "Adding Storage" ). 5.9.1.2.
Alternatives to Device File Names Because adding or removing mass storage devices can result in changes to the device file names for existing devices, there is a risk of storage not being available when the system reboots. Here is an example of the sequence of events leading to this problem: The system administrator adds a new SCSI controller so that two new SCSI drives can be added to the system (the existing SCSI bus is completely full) The original SCSI drives (including the first drive on the bus: /dev/sda ) are not changed in any way The system is rebooted The SCSI drive formerly known as /dev/sda now has a new name, because the first SCSI drive on the new controller is now /dev/sda In theory, this sounds like a terrible problem. However, in practice it rarely is. It is rarely a problem for a number of reasons. First, hardware reconfigurations of this type happen rarely. Second, it is likely that the system administrator has scheduled downtime to make the necessary changes; downtimes require careful planning to ensure the work being done does not take longer than the alloted time. This planning has the side benefit of bringing to light any issues related to device name changes. However, some organizations and system configurations are more likely to run into this issue. Organizations that require frequent reconfigurations of storage to meet their needs often use hardware capable of reconfiguration without requiring downtime. Such hotpluggable hardware makes it easy to add or remove storage. But under these circumstances device naming issues can become a problem. Fortunately, Red Hat Enterprise Linux contains features that make device name changes less of a problem. 5.9.1.2.1. File System Labels Some file systems (which are discussed further in Section 5.9.2, "File System Basics" ) have the ability to store a label -- a character string that can be used to uniquely identify the data the file system contains. Labels can then be used when mounting the file system, eliminating the need to use the device name. File system labels work well; however, file system labels must be unique system-wide. If there is ever more than one file system with the same label, you may not be able to access the file system you intended to. Also note that system configurations which do not use file systems (some databases, for example) cannot take advantage of file system labels. 5.9.1.2.2. Using devlabel The devlabel software attempts to address the device naming issue in a different manner than file system labels. The devlabel software is run by Red Hat Enterprise Linux whenever the system reboots (and whenever hotpluggable devices are inserted or removed). When devlabel runs, it reads its configuration file ( /etc/sysconfig/devlabel ) to obtain the list of devices for which it is responsible. For each device on the list, there is a symbolic link (chosen by the system administrator) and the device's UUID (Universal Unique IDentifier). The devlabel command makes sure the symbolic link always refers to the originally-specified device -- even if that device's name has changed. In this way, a system administrator can configure a system to refer to /dev/projdisk instead of /dev/sda12 , for example. Because the UUID is obtained directly from the device, devlabel must only search the system for the matching UUID and update the symbolic link appropriately. For more information on devlabel , refer to the System Administrators Guide . 
| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-storage-rhlspec |
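To make the file system label approach described above concrete, a hedged sketch; the device name, label, and mount point are placeholders, and e2label applies to ext2/ext3 file systems.
# Assign a label to an existing ext3 file system (placeholder device and label).
e2label /dev/sda12 projdisk
# Mount by label so the reference keeps working even if the device name changes.
mount LABEL=projdisk /mnt/project
# Equivalent /etc/fstab entry:
# LABEL=projdisk   /mnt/project   ext3   defaults   1 2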
Chapter 53. Desktop | Chapter 53. Desktop Firefox 60.1 ESR fails to start on IBM Z and POWER The JavaScript engine in the Firefox 60.1 Extended Support Release (ESR) browser was changed. As a consequence, Firefox 60.1 ESR on IBM Z and POWER architectures fails to start with a segmentation fault error message. (BZ# 1576289 , BZ#1579705) GV100GL graphics cannot correctly use more than one monitor Due to missing signed firmware for the GV100GL graphics, GV100GL cannot have more than one monitor connected. When a second monitor is connected, it is recognized, and the graphics driver sets the correct resolution, but the monitor stays in power-saving mode. To work around this problem, install the NVIDIA binary driver. As a result, the second monitor output works as expected under the described circumstances. (BZ# 1624337 ) The Files application cannot burn disks in the default installation The default installation of the Files application does not include the brasero-nautilus package necessary for burning CDs or DVDs. As a consequence, the Files application allows files to be dragged and dropped into CD or DVD devices but no content is burned to the CD or DVD. As a workaround, install the brasero-nautilus package by running: (BZ# 1600163 ) The on screen keyboard feature is not visible in GTK applications After enabling the on screen keyboard feature by using the Settings - Universal Access - Typing - Screen keyboard menu, the on screen keyboard is not displayed for GIMP Toolkit (GTK) applications, such as gedit . To work around this problem, add the following line to the /etc/environment configuration file, and restart GNOME: (BZ# 1625700 ) 32- and 64-bit fwupd packages cannot be used together when installing or upgrading the system The /usr/lib/systemd/system/fwupd.service file in the fwupd packages is different for 32- and 64-bit architectures. Consequently, it is impossible to install both 32- and 64-bit fwupd packages or to upgrade a Red Hat Enterprise Linux 7.5 system with both 32- and 64-bit fwupd packages to Red Hat Enterprise Linux 7.6. To work around this problem: Either do not install multilib fwupd packages. Or remove the 32-bit or the 64-bit fwupd package before upgrading from Red Hat Enterprise Linux 7.5 to Red Hat Enterprise Linux 7.6. (BZ#1623466) Installation in and booting into graphical mode are not possible on Huawei servers When installing RHEL 7.6 in graphical mode on Huawei servers with AMD64 and Intel 64 processors, the screen becomes blurred and the install interface is no longer visible. After finishing the installation in console mode, the operating system cannot be booted into graphical mode. To work around this problem: 1. Add the kernel command line parameter inst.xdriver=fbdev when installing the system, and install the system as server with GUI . 2. After the installation completes, reboot and add the kernel command line parameter single to make the system boot into maintenance mode. 3. Run the following commands: (BZ# 1624847 ) X.org server crashes during fast user switching The X.Org X11 qxl video driver does not emulate the leaving virtual terminal event on shutdown. Consequently, the X.Org display server terminates unexpectedly during fast user switching, and the current user session is terminated when switching a user. (BZ# 1640918 ) X.org X11 crashes on Lenovo T580 Due to a bug in the libpciaccess library, the X.org X11 server terminates unexpectedly on Lenovo T580 laptops.
(BZ# 1641044 ) Soft lock-ups might occur during boot in the kernel with i915 On a rare occasion when a GM45 system has an improper firmware configuration, an incorrect DisplayPort hot-plug signal can cause the i915 driver to be overloaded on boot. Consequently, certain GM45 systems might experience very slow boot times while the video driver attempts to work around the problem. In some cases, the kernel might report soft lock-ups. Customers are advised to contact their hardware vendors and request a firmware update to address this problem. (BZ#1608704) System boots to a blank screen when Xinerama is enabled When the Xinerama extension is enabled in /etc/X11/xorg.conf on a system using the nvidia/nouveau driver, the RANDR X extension gets disabled. Consequently, the login screen fails to start upon boot due to the RANDR X extension being disabled. To work around this problem, do not enable Xinerama in /etc/X11/xorg.conf . | [
"yum install brasero-nautilus",
"GTK_IM_MODULE=ibus",
"-e xorg-x11-drivers -e xorg-x11-drv-vesa init 5"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/known_issues_desktop |
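A hedged consolidation of the workarounds quoted above into shell form; package and file names are taken from the notes themselves, the i686 suffix for the 32-bit fwupd package is an assumption, and a re-login or GNOME restart is still required afterwards.
# Files application cannot burn discs: install the missing Nautilus extension.
yum install brasero-nautilus
# On screen keyboard not shown in GTK applications: set the input module system-wide.
echo 'GTK_IM_MODULE=ibus' >> /etc/environment
# 32-/64-bit fwupd conflict: keep only one architecture before upgrading to RHEL 7.6.
yum remove fwupd.i686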
Chapter 21. Using Ansible playbooks to manage self-service rules in IdM | Chapter 21. Using Ansible playbooks to manage self-service rules in IdM This section introduces self-service rules in Identity Management (IdM) and describes how to create and edit self-service access rules using Ansible playbooks. Self-service access control rules allow an IdM entity to perform specified operations on its IdM Directory Server entry. Self-service access control in IdM Using Ansible to ensure that a self-service rule is present Using Ansible to ensure that a self-service rule is absent Using Ansible to ensure that a self-service rule has specific attributes Using Ansible to ensure that a self-service rule does not have specific attributes 21.1. Self-service access control in IdM Self-service access control rules define which operations an Identity Management (IdM) entity can perform on its IdM Directory Server entry: for example, IdM users have the ability to update their own passwords. This method of control allows an authenticated IdM entity to edit specific attributes within its LDAP entry, but does not allow add or delete operations on the entire entry. Warning Be careful when working with self-service access control rules: configuring access control rules improperly can inadvertently elevate an entity's privileges. 21.2. Using Ansible to ensure that a self-service rule is present The following procedure describes how to use an Ansible playbook to define self-service rules and ensure their presence on an Identity Management (IdM) server. In this example, the new Users can manage their own name details rule grants users the ability to change their own givenname , displayname , title and initials attributes. This allows them to, for example, change their display name or initials if they want to. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-present.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-present-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the new self-service rule. Set the permission variable to a comma-separated list of permissions to grant: read and write . Set the attribute variable to a list of attributes that users can manage themselves: givenname , displayname , title , and initials . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file in the /usr/share/doc/ansible-freeipa/ directory The /usr/share/doc/ansible-freeipa/playbooks/selfservice directory 21.3. 
Using Ansible to ensure that a self-service rule is absent The following procedure describes how to use an Ansible playbook to ensure a specified self-service rule is absent from your IdM configuration. The example below describes how to make sure the Users can manage their own name details self-service rule does not exist in IdM. This will ensure that users cannot, for example, change their own display name or initials. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-absent.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-absent-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the self-service rule. Set the state variable to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/selfservice directory 21.4. Using Ansible to ensure that a self-service rule has specific attributes The following procedure describes how to use an Ansible playbook to ensure that an already existing self-service rule has specific settings. In the example, you ensure the Users can manage their own name details self-service rule also has the surname member attribute. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The Users can manage their own name details self-service rule exists in IdM. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-member-present.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-member-present-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the self-service rule to modify. Set the attribute variable to surname . Set the action variable to member . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. 
Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file available in the /usr/share/doc/ansible-freeipa/ directory The sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/selfservice directory 21.5. Using Ansible to ensure that a self-service rule does not have specific attributes The following procedure describes how to use an Ansible playbook to ensure that a self-service rule does not have specific settings. You can use this playbook to make sure a self-service rule does not grant undesired access. In the example, you ensure the Users can manage their own name details self-service rule does not have the givenname and surname member attributes. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The Users can manage their own name details self-service rule exists in IdM. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-member-absent.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-member-absent-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the self-service rule you want to modify. Set the attribute variable to givenname and surname . Set the action variable to member . Set the state variable to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file in the /usr/share/doc/ansible-freeipa/ directory The sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/selfservice directory | [
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-present.yml selfservice-present-copy.yml",
"--- - name: Self-service present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure self-service rule \"Users can manage their own name details\" is present ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" permission: read, write attribute: - givenname - displayname - title - initials",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-absent.yml selfservice-absent-copy.yml",
"--- - name: Self-service absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure self-service rule \"Users can manage their own name details\" is absent ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-member-present.yml selfservice-member-present-copy.yml",
"--- - name: Self-service member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure selfservice \"Users can manage their own name details\" member attribute surname is present ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" attribute: - surname action: member",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-member-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-member-absent.yml selfservice-member-absent-copy.yml",
"--- - name: Self-service member absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure selfservice \"Users can manage their own name details\" member attributes givenname and surname are absent ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" attribute: - givenname - surname action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-member-absent-copy.yml"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/using-ansible-playbooks-to-manage-self-service-rules-in-idm_managing-users-groups-hosts |
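The procedures in this chapter assume that the ~/MyPlaybooks/ directory already holds an inventory file and the secret.yml vault. A minimal sketch of both, using a placeholder server name (server.idm.example.com is not taken from the original examples) and a placeholder password:

# inventory: the playbooks above all target "hosts: ipaserver"
[ipaserver]
server.idm.example.com

# ansible-vault create opens an editor; enter the ipaadmin_password variable there
ansible-vault create secret.yml
# secret.yml content:
ipaadmin_password: Secret123

The password_file passed to ansible-playbook with --vault-password-file is a plain-text file containing the vault password, so keep it readable only by the user who runs the playbooks.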
Chapter 7. Available BPF Features | Chapter 7. Available BPF Features This chapter provides the complete list of Berkeley Packet Filter ( BPF ) features available in the kernel of this minor version of Red Hat Enterprise Linux 9. The tables include the lists of: System configuration and other options Available program types and supported helpers Available map types This chapter contains automatically generated output of the bpftool feature command. Table 7.1. System configuration and other options Option Value unprivileged_bpf_disabled 2 (bpf() syscall restricted to privileged users, admin can change) JIT compiler 1 (enabled) JIT compiler hardening 1 (enabled for unprivileged users) JIT compiler kallsyms exports 1 (enabled for root) Memory limit for JIT for unprivileged users 264241152 CONFIG_BPF y CONFIG_BPF_SYSCALL y CONFIG_HAVE_EBPF_JIT y CONFIG_BPF_JIT y CONFIG_BPF_JIT_ALWAYS_ON y CONFIG_DEBUG_INFO_BTF y CONFIG_DEBUG_INFO_BTF_MODULES y CONFIG_CGROUPS y CONFIG_CGROUP_BPF y CONFIG_CGROUP_NET_CLASSID y CONFIG_SOCK_CGROUP_DATA y CONFIG_BPF_EVENTS y CONFIG_KPROBE_EVENTS y CONFIG_UPROBE_EVENTS y CONFIG_TRACING y CONFIG_FTRACE_SYSCALLS y CONFIG_FUNCTION_ERROR_INJECTION y CONFIG_BPF_KPROBE_OVERRIDE n CONFIG_NET y CONFIG_XDP_SOCKETS y CONFIG_LWTUNNEL_BPF y CONFIG_NET_ACT_BPF m CONFIG_NET_CLS_BPF m CONFIG_NET_CLS_ACT y CONFIG_NET_SCH_INGRESS m CONFIG_XFRM y CONFIG_IP_ROUTE_CLASSID y CONFIG_IPV6_SEG6_BPF n CONFIG_BPF_LIRC_MODE2 n CONFIG_BPF_STREAM_PARSER y CONFIG_NETFILTER_XT_MATCH_BPF m CONFIG_BPFILTER n CONFIG_BPFILTER_UMH n CONFIG_TEST_BPF m CONFIG_HZ 1000 bpf() syscall available Large program size limit available Table 7.2. Available program types and supported helpers Program type Available helpers socket_filter bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_load_bytes_relative, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock kprobe bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, 
bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_get_attach_cookie, bpf_task_pt_regs, bpf_get_branch_snapshot sched_cls bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_skb_adjust_room, bpf_skb_get_xfrm_state, bpf_skb_load_bytes_relative, bpf_fib_lookup, bpf_skb_cgroup_id, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_skb_cgroup_classid, bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock sched_act bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_skb_adjust_room, bpf_skb_get_xfrm_state, bpf_skb_load_bytes_relative, bpf_fib_lookup, bpf_skb_cgroup_id, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, 
bpf_tcp_check_syncookie, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_skb_cgroup_classid, bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock tracepoint bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_get_attach_cookie, bpf_task_pt_regs, bpf_get_branch_snapshot xdp bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_redirect, bpf_perf_event_output, bpf_csum_diff, bpf_get_current_task, bpf_get_numa_node_id, bpf_xdp_adjust_head, bpf_redirect_map, bpf_xdp_adjust_meta, bpf_xdp_adjust_tail, bpf_fib_lookup, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock perf_event bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, 
bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_perf_prog_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_read_branch_records, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_get_attach_cookie, bpf_task_pt_regs, bpf_get_branch_snapshot cgroup_skb bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_load_bytes_relative, bpf_skb_cgroup_id, bpf_get_local_storage, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_sk_cgroup_id, bpf_sk_ancestor_cgroup_id, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock cgroup_sock bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_storage_get, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs lwt_in bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, 
bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_lwt_push_encap, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock lwt_out bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock lwt_xmit bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_lwt_push_encap, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock sock_ops bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, 
bpf_setsockopt, bpf_sock_map_update, bpf_getsockopt, bpf_sock_ops_cb_flags_set, bpf_sock_hash_update, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_tcp_sock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_load_hdr_opt, bpf_store_hdr_opt, bpf_reserve_hdr_opt, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock sk_skb bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_adjust_room, bpf_sk_redirect_map, bpf_sk_redirect_hash, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock cgroup_device bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs sk_msg bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_msg_redirect_map, bpf_msg_apply_bytes, bpf_msg_cork_bytes, bpf_msg_pull_data, bpf_msg_redirect_hash, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_msg_push_data, bpf_msg_pop_data, bpf_spin_lock, bpf_spin_unlock, 
bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock raw_tracepoint bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_task_pt_regs, bpf_get_branch_snapshot cgroup_sock_addr bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_setsockopt, bpf_getsockopt, bpf_bind, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock lwt_seg6local bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, 
bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock lirc_mode2 not supported sk_reuseport bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_skb_load_bytes_relative, bpf_sk_select_reuseport, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs flow_dissector bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock cgroup_sysctl bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sysctl_get_name, bpf_sysctl_get_current_value, bpf_sysctl_get_new_value, bpf_sysctl_set_new_value, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs 
raw_tracepoint_writable bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_task_pt_regs, bpf_get_branch_snapshot cgroup_sockopt bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_tcp_sock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs tracing not supported struct_ops bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_perf_event_read, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_stackid, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_xdp_adjust_head, bpf_probe_read_str, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_setsockopt, bpf_skb_adjust_room, bpf_redirect_map, bpf_sk_redirect_map, bpf_sock_map_update, bpf_xdp_adjust_meta, bpf_perf_event_read_value, bpf_perf_prog_read_value, bpf_getsockopt, bpf_override_return, bpf_sock_ops_cb_flags_set, bpf_msg_redirect_map, bpf_msg_apply_bytes, bpf_msg_cork_bytes, bpf_msg_pull_data, bpf_bind, bpf_xdp_adjust_tail, bpf_skb_get_xfrm_state, bpf_get_stack, bpf_skb_load_bytes_relative, 
bpf_fib_lookup, bpf_sock_hash_update, bpf_msg_redirect_hash, bpf_sk_redirect_hash, bpf_lwt_push_encap, bpf_lwt_seg6_store_bytes, bpf_lwt_seg6_adjust_srh, bpf_lwt_seg6_action, bpf_rc_repeat, bpf_rc_keydown, bpf_skb_cgroup_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_sk_select_reuseport, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_msg_push_data, bpf_msg_pop_data, bpf_rc_pointer_rel, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_sysctl_get_name, bpf_sysctl_get_current_value, bpf_sysctl_get_new_value, bpf_sysctl_set_new_value, bpf_strtol, bpf_strtoul, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_send_signal, bpf_tcp_gen_syncookie, bpf_skb_output, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_tcp_send_ack, bpf_send_signal_thread, bpf_jiffies64, bpf_read_branch_records, bpf_get_ns_current_pid_tgid, bpf_xdp_output, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_seq_printf, bpf_seq_write, bpf_sk_cgroup_id, bpf_sk_ancestor_cgroup_id, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_get_task_stack, bpf_load_hdr_opt, bpf_store_hdr_opt, bpf_reserve_hdr_opt, bpf_inode_storage_get, bpf_inode_storage_delete, bpf_d_path, bpf_copy_from_user, bpf_snprintf_btf, bpf_seq_printf_btf, bpf_skb_cgroup_classid, bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_bprm_opts_set, bpf_ktime_get_coarse_ns, bpf_ima_inode_hash, bpf_sock_from_file, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf, bpf_sys_bpf, bpf_btf_find_by_name_kind, bpf_sys_close, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_get_func_ip, bpf_get_attach_cookie, bpf_task_pt_regs, bpf_get_branch_snapshot, bpf_skc_to_unix_sock, bpf_kallsyms_lookup_name ext not supported lsm not supported sk_lookup bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf, bpf_timer_init, bpf_timer_set_callback, bpf_timer_start, bpf_timer_cancel, bpf_task_pt_regs, bpf_skc_to_unix_sock Table 7.3. 
Available map types Map type Available hash yes array yes prog_array yes perf_event_array yes percpu_hash yes percpu_array yes stack_trace yes cgroup_array yes lru_hash yes lru_percpu_hash yes lpm_trie yes array_of_maps yes hash_of_maps yes devmap yes sockmap yes cpumap yes xskmap yes sockhash yes cgroup_storage yes reuseport_sockarray yes percpu_cgroup_storage yes queue yes stack yes sk_storage yes devmap_hash yes struct_ops no ringbuf yes inode_storage yes task_storage yes | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.1_release_notes/available_bpf_features |
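The three tables in this chapter are generated from the bpftool feature command, so an equivalent report can be produced directly on a running RHEL 9.1 system. A short sketch, assuming the bpftool package is installed (exact output depends on the booted kernel):

dnf install -y bpftool
bpftool feature probe kernel

# spot-check a single value from Table 7.1
sysctl kernel.unprivileged_bpf_disabled

The probe output follows the same order as the tables above: system configuration options first, then the program types with their supported helpers, and finally the available map types.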
Chapter 7. Enabling the Red Hat Virtualization Manager Repositories | Chapter 7. Enabling the Red Hat Virtualization Manager Repositories You need to log in and register the Manager machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-tus-rpms \ --enable=rhel-8-for-x86_64-baseos-tus-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Enable the pki-deps module. # dnf module -y enable pki-deps Enable version 12 of the postgresql module. # dnf module -y enable postgresql:12 Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Synchronize installed packages to update them to the latest available versions. # dnf distro-sync --nobest Additional resources For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream The Red Hat Virtualization Manager has been migrated to a self-hosted engine setup. The Manager is now operating on a virtual machine on the new self-hosted engine node. The hosts will be running in the new environment, but cannot host the Manager virtual machine. You can convert some or all of these hosts to self-hosted engine nodes. | [
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= pool_id",
"subscription-manager list --consumed",
"dnf repolist",
"subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-baseos-tus-rpms",
"subscription-manager release --set=8.6",
"dnf module -y enable pki-deps",
"dnf module -y enable postgresql:12",
"dnf module -y enable nodejs:14",
"dnf distro-sync --nobest"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/Enabling_the_Red_Hat_Virtualization_Manager_Repositories_migrating_to_SHE |
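The procedure asks you to record the pool ID from the subscription-manager list --available output. When many subscriptions are available, the relevant lines can be filtered out; a sketch, assuming the subscription name contains "Red Hat Virtualization" (adjust the --matches pattern to your entitlements):

subscription-manager list --available --matches='*Red Hat Virtualization*' | grep -E 'Subscription Name|Pool ID'

After the repositories are configured, dnf repolist should list rhv-4.4-manager-for-rhel-8-x86_64-rpms among the enabled repositories.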
Chapter 4. Performance impact of alt-java | Chapter 4. Performance impact of alt-java The alt-java binary contains the SSB mitigation, so the performance impact of the SSB mitigation no longer applies to the java binary. Note Using alt-java might significantly reduce the performance of Java programs. You can find detailed information about some Java performance issues that might occur when using alt-java by selecting any of the Red Hat Bugzilla links listed in the Additional resources section. Additional resources (java-11-openjdk) Seccomp related performance regression in RHEL8 . (java-1.8.0-openjdk) Seccomp related performance regression in RHEL8 . CVE-2018-3639 Detail . CVE-2018-3639 hw: cpu: speculative store bypass . CVE-2018-3639 java-1.8.0-openjdk: hw: cpu: speculative store bypass (rhel-7.6) Revised on 2024-05-10 09:08:04 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_alt-java_with_red_hat_build_of_openjdk/altjava-performance-impact
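Because the slowdown depends heavily on the workload, the most reliable way to gauge it is to run the same program under both launchers and compare. A rough sketch, where app.jar is a placeholder for your own application rather than anything shipped with OpenJDK:

time java -jar app.jar
time alt-java -jar app.jar

alt-java accepts the same arguments as java, so the only difference between the two runs is the SSB mitigation enabled in alt-java.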
16.3. Installation | 16.3. Installation To install libguestfs, guestfish, the libguestfs tools, guestmount, and support for Windows guest virtual machines, subscribe to the Red Hat Enterprise Linux V2WIN channel on the Red Hat website, and then run the following command: To install every libguestfs-related package, including the language bindings, run the following command: | [
"yum install libguestfs guestfish libguestfs-tools libguestfs-winsupport",
"yum install '*guestf*'"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-guide-guest_disks_libguestfs-installation |
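After installation, it is worth confirming that libguestfs can launch its appliance before pointing it at real guest disks. A short sketch using tools shipped with the libguestfs packages (no guest disk image is required for either command):

libguestfs-test-tool
guestfish --version

libguestfs-test-tool runs a self-test of the host setup and ends with a TEST FINISHED OK message when everything works; failures at this stage indicate a host-side problem rather than an issue with any particular guest.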
Chapter 114. Ref | Chapter 114. Ref Both producer and consumer are supported The Ref component is used for lookup of existing endpoints bound in the Registry. 114.1. Dependencies When using ref with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ref-starter</artifactId> </dependency> 114.2. URI format Where someName is the name of an endpoint in the Registry (usually, but not always, the Spring registry). If you are using the Spring registry, someName would be the bean ID of an endpoint in the Spring registry. 114.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 114.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 114.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 114.4. Component Options The Ref component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 114.5. Endpoint Options The Ref endpoint is configured using URI syntax: with the following path and query parameters: 114.5.1. Path Parameters (1 parameters) Name Description Default Type name (common) Required Name of endpoint to lookup in the registry. String 114.5.2. Query Parameters (4 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 114.6. Runtime lookup This component can be used when you need dynamic discovery of endpoints in the Registry where you can compute the URI at runtime. Then you can look up the endpoint using the following code: // lookup the endpoint String myEndpointRef = "bigspenderOrder"; Endpoint endpoint = context.getEndpoint("ref:" + myEndpointRef); Producer producer = endpoint.createProducer(); Exchange exchange = producer.createExchange(); exchange.getIn().setBody(payloadToSend); // send the exchange producer.process(exchange); And you could have a list of endpoints defined in the Registry such as: <camelContext id="camel" xmlns="http://activemq.apache.org/camel/schema/spring"> <endpoint id="normalOrder" uri="activemq:order.slow"/> <endpoint id="bigspenderOrder" uri="activemq:order.high"/> </camelContext> 114.7. Sample In the sample below we use the ref: in the URI to reference the endpoint with the spring ID, endpoint2 : You could, of course, have used the ref attribute instead: <to uri="ref:endpoint2"/> Which is the more common way to write it. 114.8. 
Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.ref.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.ref.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.ref.enabled Whether to enable auto configuration of the ref component. This is enabled by default. Boolean camel.component.ref.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ref-starter</artifactId> </dependency>",
"ref:someName[?options]",
"ref:name",
"// lookup the endpoint String myEndpointRef = \"bigspenderOrder\"; Endpoint endpoint = context.getEndpoint(\"ref:\" + myEndpointRef); Producer producer = endpoint.createProducer(); Exchange exchange = producer.createExchange(); exchange.getIn().setBody(payloadToSend); // send the exchange producer.process(exchange);",
"<camelContext id=\"camel\" xmlns=\"http://activemq.apache.org/camel/schema/spring\"> <endpoint id=\"normalOrder\" uri=\"activemq:order.slow\"/> <endpoint id=\"bigspenderOrder\" uri=\"activemq:order.high\"/> </camelContext>",
"<to uri=\"ref:endpoint2\"/>"
]
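The runtime lookup and XML endpoint definitions shown in section 114.6 can also be expressed as a plain Java DSL route. The following is an illustrative sketch only, assuming an endpoint has already been bound in the registry under the id endpoint2 as in the samples above; it is not taken from the component documentation itself:

import org.apache.camel.builder.RouteBuilder;

public class RefRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // "ref:endpoint2" resolves whatever endpoint is registered under the id "endpoint2"
        from("direct:start")
            .to("ref:endpoint2");
    }
}

Using the ref: prefix keeps the physical endpoint URI out of the route definition, so the target endpoint can be swapped in the registry without touching the route.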
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-ref-component-starter |
Chapter 6. Configure storage for OpenShift Container Platform services | Chapter 6. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as image registry, monitoring, and logging. The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for these services. If the storage for these critical services runs out of space, the cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data sub section of Configuring persistent storage in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 6.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration Custom Resource Definitions . Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads Pods . Set the Project to openshift-image-registry . 
Verify that the new image-registry-* pod appears with a status of Running , and that the image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 6.2. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises of Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 6.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 6.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 6.3. Persistent Volume Claims attached to prometheus-k8s-* pod 6.3. 
Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on default storage available from the nodes. You can edit the default configuration of OpenShift logging (ElasticSearch) to be backed by OpenShift Data Foundation to have OpenShift Data Foundation backed logging (Elasticsearch). Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 6.3.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster will be bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard will be backed by a single replica. A copy of the shard is replicated across all the nodes and are always available and the copy can be recovered if at least two nodes exist due to the single redundancy policy. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note Omission of the storage block will result in a deployment backed by default storage. For example: For more information, see Configuring cluster logging . 6.3.2. Configuring cluster logging to use OpenShift data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances Tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . 
In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 6.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workload Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the following default index data retention of 5 days as a default. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide. | [
"storage: pvc: claim: <new-pvc-name>",
"storage: pvc: claim: ocs4registry",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, e.g. 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}",
"spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd",
"config.yaml: | openshift-storage: delete: days: 5"
]
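The Persistent Volume Claim for the image registry described in section 6.1 can also be created from the command line instead of the web console. The manifest below is a minimal sketch: the file name and the storage class name ocs-storagecluster-cephfs are assumptions, so substitute whichever storage class in your cluster uses the openshift-storage.cephfs.csi.ceph.com provisioner:

# pvc-ocs4registry.yaml (hypothetical file name)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ocs4registry
  namespace: openshift-image-registry
spec:
  accessModes:
    - ReadWriteMany              # Shared Access (RWX), as required for the registry
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-cephfs   # assumed name of a CephFS-backed class

Apply it with oc apply -f pvc-ocs4registry.yaml and then continue with the registry configuration steps described above.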
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/configure_storage_for_openshift_container_platform_services |
Chapter 1. Support overview | Chapter 1. Support overview Red Hat offers cluster administrators tools for gathering data for your cluster, monitoring, and troubleshooting. 1.1. Get support Get support : Visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources. 1.2. Remote health monitoring issues Remote health monitoring issues : OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. Red Hat uses this data to understand and resolve issues in connected clusters. Similar to connected clusters, you can Use remote health monitoring in a restricted network . OpenShift Container Platform collects data and monitors health using the following: Telemetry : The Telemetry Client gathers and uploads the metrics values to Red Hat every four minutes and thirty seconds. Red Hat uses this data to: Monitor the clusters. Roll out OpenShift Container Platform upgrades. Improve the upgrade experience. Insights Operator : By default, OpenShift Container Platform installs and enables the Insights Operator, which reports configuration and component failure status every two hours. The Insights Operator helps to: Identify potential cluster issues proactively. Provide a solution and preventive action in Red Hat OpenShift Cluster Manager. You can review telemetry information . If you have enabled remote health reporting, Use Insights to identify issues . You can optionally disable remote health reporting. 1.3. Gather data about your cluster Gather data about your cluster : Red Hat recommends gathering your debugging information when opening a support case. This helps Red Hat Support to perform a root cause analysis. A cluster administrator can use the following to gather data about your cluster: The must-gather tool : Use the must-gather tool to collect information about your cluster and to debug the issues. sosreport : Use the sosreport tool to collect configuration details, system information, and diagnostic data for debugging purposes. Cluster ID : Obtain the unique identifier for your cluster when providing information to Red Hat Support. Bootstrap node journal logs : Gather bootkube.service journald unit logs and container logs from the bootstrap node to troubleshoot bootstrap-related issues. Cluster node journal logs : Gather journald unit logs and logs within /var/log on individual cluster nodes to troubleshoot node-related issues. A network trace : Provide a network packet trace from a specific OpenShift Container Platform cluster node or a container to Red Hat Support to help troubleshoot network-related issues. Diagnostic data : Use the redhat-support-tool command to gather diagnostic data about your cluster. 1.4. Troubleshooting issues A cluster administrator can monitor and troubleshoot the following OpenShift Container Platform component issues: Installation issues : OpenShift Container Platform installation proceeds through various stages. You can perform the following: Monitor the installation stages. Determine at which stage installation issues occur. Investigate multiple installation issues. Gather logs from a failed installation. Node issues : A cluster administrator can verify and troubleshoot node-related issues by reviewing the status, resource usage, and configuration of a node. You can query the following: Kubelet's status on a node. Cluster node journal logs.
Crio issues : A cluster administrator can verify CRI-O container runtime engine status on each cluster node. If you experience container runtime issues, perform the following: Gather CRI-O journald unit logs. Cleaning CRI-O storage. Operating system issues : OpenShift Container Platform runs on Red Hat Enterprise Linux CoreOS. If you experience operating system issues, you can investigate kernel crash procedures. Ensure the following: Enable kdump. Test the kdump configuration. Analyze a core dump. Network issues : To troubleshoot Open vSwitch issues, a cluster administrator can perform the following: Configure the Open vSwitch log level temporarily. Configure the Open vSwitch log level permanently. Display Open vSwitch logs. Operator issues : A cluster administrator can do the following to resolve Operator issues: Verify Operator subscription status. Check Operator pod health. Gather Operator logs. Pod issues : A cluster administrator can troubleshoot pod-related issues by reviewing the status of a pod and completing the following: Review pod and container logs. Start debug pods with root access. Source-to-image issues : A cluster administrator can observe the S2I stages to determine where in the S2I process a failure occurred. Gather the following to resolve Source-to-Image (S2I) issues: Source-to-Image diagnostic data. Application diagnostic data to investigate application failure. Storage issues : A multi-attach storage error occurs when the mounting volume on a new node is not possible because the failed node cannot unmount the attached volume. A cluster administrator can do the following to resolve multi-attach storage issues: Enable multiple attachments by using RWX volumes. Recover or delete the failed node when using an RWO volume. Monitoring issues : A cluster administrator can follow the procedures on the troubleshooting page for monitoring. If the metrics for your user-defined projects are unavailable or if Prometheus is consuming a lot of disk space, check the following: Investigate why user-defined metrics are unavailable. Determine why Prometheus is consuming a lot of disk space. Logging issues : A cluster administrator can follow the procedures in the "Support" and "Troubleshooting logging" sections to resolve logging issues: Viewing the status of the Red Hat OpenShift Logging Operator Viewing the status of logging components Troubleshooting logging alerts Collecting information about your logging environment by using the oc adm must-gather command OpenShift CLI ( oc ) issues : Investigate OpenShift CLI ( oc ) issues by increasing the log level. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/support/support-overview |
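Several of the data-gathering tools named in this overview are invoked from the OpenShift CLI. The two commands below are standard invocations shown for illustration; they are not reproduced from this chapter, so confirm the exact options against the linked procedures:

# Collect the default must-gather data set to attach to a support case
oc adm must-gather

# Print the unique cluster ID to provide to Red Hat Support
oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'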
Chapter 15. Managing system clocks to satisfy application needs | Chapter 15. Managing system clocks to satisfy application needs Multiprocessor systems such as NUMA or SMP have multiple instances of hardware clocks. During boot time the kernel discovers the available clock sources and selects one to use. To improve performance, you can change the clock source used to meet the minimum requirements of a real-time system. 15.1. Hardware clocks Multiple instances of clock sources found in multiprocessor systems, such as non-uniform memory access (NUMA) and Symmetric multiprocessing (SMP), interact among themselves and the way they react to system events, such as CPU frequency scaling or entering energy economy modes, determine whether they are suitable clock sources for the real-time kernel. The preferred clock source is the Time Stamp Counter (TSC). If the TSC is not available, the High Precision Event Timer (HPET) is the second best option. However, not all systems have HPET clocks, and some HPET clocks can be unreliable. In the absence of TSC and HPET, other options include the ACPI Power Management Timer (ACPI_PM), the Programmable Interval Timer (PIT), and the Real Time Clock (RTC). The last two options are either costly to read or have a low resolution (time granularity), therefore they are sub-optimal for use with the real-time kernel. 15.2. Viewing the available clock sources in your system The list of available clock sources in your system is in the /sys/devices/system/clocksource/clocksource0/available_clocksource file. Procedure Display the available_clocksource file. In this example, the available clock sources in the system are TSC, HPET, and ACPI_PM. 15.3. Viewing the clock source currently in use The currently used clock source in your system is stored in the /sys/devices/system/clocksource/clocksource0/current_clocksource file. Procedure Display the current_clocksource file. In this example, the current clock source in the system is TSC. 15.4. Temporarily changing the clock source to use Sometimes the best-performing clock for a system's main application is not used due to known problems on the clock. After ruling out all problematic clocks, the system can be left with a hardware clock that is unable to satisfy the minimum requirements of a real-time system. Requirements for crucial applications vary on each system. Therefore, the best clock for each application, and consequently each system, also varies. Some applications depend on clock resolution, and a clock that delivers reliable nanoseconds readings can be more suitable. Applications that read the clock too often can benefit from a clock with a smaller reading cost (the time between a read request and the result). In these cases it is possible to override the clock selected by the kernel, provided that you understand the side effects of the override and can create an environment which will not trigger the known shortcomings of the given hardware clock. Important The kernel automatically selects the best available clock source. Overriding the selected clock source is not recommended unless the implications are well understood. Prerequisites You have root permissions on the system. Procedure View the available clock sources. As an example, consider the available clock sources in the system are TSC, HPET, and ACPI_PM. Write the name of the clock source you want to use to the /sys/devices/system/clocksource/clocksource0/current_clocksource file. Note The changes apply to the clock source currently in use. 
When the system reboots, the default clock is used. To make the change persistent, see Making persistent kernel tuning parameter changes . Verification Display the current_clocksource file to ensure that the current clock source is the specified clock source. The example uses HPET as the current clock source in the system. 15.5. Comparing the cost of reading hardware clock sources You can compare the speed of the clocks in your system. Reading from the TSC involves reading a register from the processor. Reading from the HPET clock involves reading a memory area. Reading from the TSC is faster, which provides a significant performance advantage when timestamping hundreds of thousands of messages per second. Prerequisites You have root permissions on the system. The clock_timing program must be on the system. For more information, see the clock_timing program . Procedure Change to the directory in which the clock_timing program is saved. View the available clock sources in your system. In this example, the available clock sources in the system are TSC , HPET , and ACPI_PM . View the currently used clock source. In this example, the current clock source in the system is TSC . Run the time utility in conjunction with the ./ clock_timing program. The output displays the duration required to read the clock source 10 million times. The example shows the following parameters: real - The total time spent beginning from program invocation until the process ends. real includes user and kernel times, and will usually be larger than the sum of the latter two. If this process is interrupted by an application with higher priority, or by a system event such as a hardware interrupt (IRQ), this time spent waiting is also computed under real . user - The time the process spent in user space performing tasks that did not require kernel intervention. sys - The time spent by the kernel while performing tasks required by the user process. These tasks include opening files, reading and writing to files or I/O ports, memory allocation, thread creation, and network related activities. Write the name of the clock source you want to test to the /sys/devices/system/clocksource/clocksource0/current_clocksource file. In this example, the current clock source is changed to HPET . Repeat steps 4 and 5 for all of the available clock sources. Compare the results of step 4 for all of the available clock sources. Additional resources time(1) man page on your system 15.6. Synchronizing the TSC timer on Opteron CPUs The current generation of AMD64 Opteron processors can be susceptible to a large gettimeofday skew. This skew occurs when both cpufreq and the Time Stamp Counter (TSC) are in use. RHEL for Real Time provides a method to prevent this skew by forcing all processors to simultaneously change to the same frequency. As a result, the TSC on a single processor never increments at a different rate than the TSC on another processor. Prerequisites You have root permissions on the system. Procedure Enable the clocksource=tsc and powernow-k8.tscsync=1 kernel options: This forces the use of TSC and enables simultaneous core processor frequency transitions. Restart the machine. Additional resources gettimeofday(2) man page on your system 15.7. The clock_timing program The clock_timing program reads the current clock source 10 million times. In conjunction with the time utility it measures the amount of time needed to do this. Procedure To create the clock_timing program: Create a directory for the program files. 
Change to the created directory. Create a source file and open it in a text editor. Enter the following into the file: Save the file and exit the editor. Compile the file. The clock_timing program is ready and can be run from the directory in which it is saved. | [
"cat /sys/devices/system/clocksource/clocksource0/available_clocksource tsc hpet acpi_pm",
"cat /sys/devices/system/clocksource/clocksource0/current_clocksource tsc",
"cat /sys/devices/system/clocksource/clocksource0/available_clocksource tsc hpet acpi_pm",
"echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource",
"cat /sys/devices/system/clocksource/clocksource0/current_clocksource hpet",
"cd clock_test",
"cat /sys/devices/system/clocksource/clocksource0/available_clocksource tsc hpet acpi_pm",
"cat /sys/devices/system/clocksource/clocksource0/current_clocksource tsc",
"time ./clock_timing real 0m0.601s user 0m0.592s sys 0m0.002s",
"echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource",
"grubby --update-kernel=ALL --args=\"clocksource=tsc powernow-k8.tscsync=1\"",
"mkdir clock_test",
"cd clock_test",
"{EDITOR} clock_timing.c",
"#include <time.h> void main() { int rc; long i; struct timespec ts; for(i=0; i<10000000; i++) { rc = clock_gettime(CLOCK_MONOTONIC, &ts); } }",
"gcc clock_timing.c -o clock_timing -lrt"
]
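Steps 4 and 5 of the comparison procedure in section 15.5 can be repeated for every clock source with a small shell loop. This is an illustrative sketch, to be run as root from the directory that contains clock_timing; it only automates the manual repetition described above:

for clk in $(cat /sys/devices/system/clocksource/clocksource0/available_clocksource); do
    echo "$clk" > /sys/devices/system/clocksource/clocksource0/current_clocksource
    echo "== $clk =="
    time ./clock_timing
done

Remember that the clock source change made by the loop is temporary and the default clock returns after a reboot.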
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/managing-system-clocks-to-satisfy-application-needs_optimizing-rhel9-for-real-time-for-low-latency-operation |
Chapter 5. Upgrading CodeReady Workspaces | Chapter 5. Upgrading CodeReady Workspaces This chapter describes how to upgrade a CodeReady Workspaces instance from version 2.14 to CodeReady Workspaces 2.15. The method used to install the CodeReady Workspaces instance determines the method to use for the upgrade: Section 5.1, "Upgrading CodeReady Workspaces using OperatorHub" Section 5.2, "Upgrading CodeReady Workspaces using the CLI management tool" Section 5.3, "Upgrading CodeReady Workspaces using the CLI management tool in restricted environment" A CodeReady Workspaces upgrade can be rolled back: Section 5.5, "Rolling back a CodeReady Workspaces upgrade" 5.1. Upgrading CodeReady Workspaces using OperatorHub This section describes how to upgrade from an earlier minor version using the Operator from OperatorHub in the OpenShift web console. OperatorHub supports Automatic and Manual upgrade strategies: Automatic : The upgrade process starts when a new version of the Operator is published. Manual : The update must be manually approved every time the new version of the Operator is published. 5.1.1. Specifying the approval strategy of CodeReady Workspaces in OperatorHub Prerequisites An administrator account on an instance of OpenShift. An instance of CodeReady Workspaces 2.14 or earlier that was installed by using OperatorHub. Procedure Open the OpenShift web console. Navigate to the Operators Installed Operators page. Click Red Hat CodeReady Workspaces in the list of installed Operators. Navigate to the Subscription tab. Configure the approval strategy to Automatic or Manual . 5.1.2. Manually upgrading CodeReady Workspaces in OperatorHub OperatorHub is an assembly point for sharing Operators. The OperatorHub helps you deploy and update applications. The following section describes the process of upgrading CodeReady Workspaces by using OperatorHub and the Manual approval strategy approach. Use the Manual approval strategy to prevent automatic updates of the Operator with every release. Prerequisites An administrator account on an instance of OpenShift. An instance of CodeReady Workspaces 2.14 or earlier that was installed by using OperatorHub. The approval strategy in the subscription is Manual . Procedure Open the OpenShift web console. Navigate to the Operators Installed Operators page. In the list of the installed Operators, click Red Hat CodeReady Workspaces . Navigate to the Subscription tab. Next to the Upgrade Status , inspect the upgrades that require approval. The expected message is 1 requires approval . Click 1 requires approval . Click Preview Install Plan . Review the resources that are available for upgrade and click Approve . Verification steps Navigate to the Operators Installed Operators page. Monitor the upgrade progress. When complete, the status changes to Succeeded and Up to date . The 2.15 version number is visible at the end of the page. Additional resources Upgrading installed Operators section in the OpenShift documentation. 5.2. Upgrading CodeReady Workspaces using the CLI management tool This section describes how to upgrade from the previous minor version using the CLI management tool. Prerequisites An administrative account on OpenShift. A running instance of a minor version of Red Hat CodeReady Workspaces, installed using the CLI management tool on the same instance of OpenShift, in the <openshift-workspaces> project. crwctl is available and updated. See Section 3.3.1, "Installing the crwctl CLI management tool" .
Procedure Save and push changes back to the Git repositories for all running CodeReady Workspaces 2.14 workspaces. Shut down all workspaces in the CodeReady Workspaces 2.14 instance. Upgrade CodeReady Workspaces: Note For slow systems or internet connections, add the --k8spodwaittimeout=1800000 flag option to the crwctl server:update command to extend the Pod timeout period to 1800000 ms or longer. Verification steps Navigate to the CodeReady Workspaces instance. The 2.15 version number is visible at the bottom of the page. 5.3. Upgrading CodeReady Workspaces using the CLI management tool in restricted environment This section describes how to upgrade Red Hat CodeReady Workspaces using the CLI management tool in restricted environment. The upgrade path supports minor version update, from CodeReady Workspaces version 2.14 to version 2.15. Prerequisites An administrative account on an instance of OpenShift. A running instance version 2.14 of Red Hat CodeReady Workspaces, installed using the CLI management tool on the same instance of OpenShift, with the crwctl --installer operator method, in the <openshift-workspaces> project. See Section 3.4, "Installing CodeReady Workspaces in a restricted environment" . The crwctl 2.15 management tool is available. See Section 3.3.1, "Installing the crwctl CLI management tool" . 5.3.1. Understanding network connectivity in restricted environments CodeReady Workspaces requires that each OpenShift Route created for CodeReady Workspaces is accessible from inside the OpenShift cluster. These CodeReady Workspaces components have a OpenShift Route: codeready-workspaces-server , keycloak , devfile-registry , plugin-registry . Consider the network topology of the environment to determine how best to accomplish this. Example 5.1. Network owned by a company or an organization, disconnected from the public Internet The network administrators must ensure that it is possible to route traffic bound from the cluster to OpenShift Route host names. Example 5.2. Private subnetwork in a cloud provider Create a proxy configuration allowing the traffic to leave the node to reach an external-facing Load Balancer. 5.3.2. Building offline registry images 5.3.2.1. Building an offline devfile registry image This section describes how to build an offline devfile registry image. Starting workspaces without relying on resources from the outside Internet requires building this image. The image contains all sample projects referenced in devfiles as zip files. Prerequisites: A running installation of podman or docker . Procedure Clone the devfile registry repository and check out the version to deploy: Build an offline devfile registry image: Note To display full options for the build.sh script, use the --help parameter. Additional resources https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/administration_guide/index#customizing-the-registries.adoc . 5.3.2.2. Building an offline plug-in registry image This section describes how to build an offline plug-in registry image. Starting workspaces without relying on resources from the outside Internet requires building this image. The image contains plug-in metadata and all plug-in or extension artifacts. Prerequisites Node.js 12.x A running version of yarn. See Installing Yarn . ./node_modules/.bin is in the PATH environment variable. A running installation of podman or docker . 
Procedure Clone the plug-in registry repository and check out the version to deploy: Build offline plug-in registry image: Note To display full options for the build.sh script, use the --help parameter. Additional resources https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/administration_guide/index#customizing-the-registries.adoc . 5.3.3. Preparing an private registry Prerequisites The oc tool is available. The skopeo tool, version 0.1.40 or later, is available. The podman tool is available. An image registry accessible from the OpenShift cluster and supporting the format of the V2 image manifest, schema version 2. Ensure you can push to it from a location having, at least temporarily, access to the internet. Table 5.1. Placeholders used in examples <source-image> Full coordinates of the source image, including registry, organization, and digest. <target-registry> Host name and port of the target container-image registry. <target-organization> Organization in the target container-image registry <target-image> Image name and digest in the target container-image registry. <target-user> User name in the target container-image registry. <target-password> User password in the target container-image registry. Procedure Log into the internal image registry: Note If you encounter an error, like x509: certificate signed by unknown authority , when attempting to push to the internal registry, try one of these workarounds: add the OpenShift cluster's certificate to /etc/containers/certs.d/ <target-registry> add the registry as an insecure registry by adding the following lines to the Podman configuration file located at /etc/containers/registries.conf : Copy images without changing their digest. Repeat this step for every image in the following table: Note Table 5.2. Understanding the usage of the container-images from the prefix or keyword they include in their name Usage Prefix or keyword Essential not stacks- or plugin- Workspaces stacks- , plugin- Table 5.3. 
Images to copy in the private registry <source-image> <target-image> registry.redhat.io/codeready-workspaces/backup-rhel8@sha256:6b636d6bba6c509756803c4186960ed69adfa2eae42dde5af48b67c6e0794915 backup-rhel8@sha256:6b636d6bba6c509756803c4186960ed69adfa2eae42dde5af48b67c6e0794915 registry.redhat.io/codeready-workspaces/configbump-rhel8@sha256:15574551ec79aa8cd9e0eea3d379fc7a77a4e16cc92f937b4a89c6f9a6f2ce40 configbump-rhel8@sha256:15574551ec79aa8cd9e0eea3d379fc7a77a4e16cc92f937b4a89c6f9a6f2ce40 registry.redhat.io/codeready-workspaces/crw-2-rhel8-operator@sha256:1c8b06e457ba008f86e5fefc82013acdb4639317cde809e926931d202a194a17 crw-2-rhel8-operator@sha256:1c8b06e457ba008f86e5fefc82013acdb4639317cde809e926931d202a194a17 registry.redhat.io/codeready-workspaces/dashboard-rhel8@sha256:e46636f0c66221d9a01506b114e18f3d3afe61da99bf224e71cf4051235e51ac dashboard-rhel8@sha256:e46636f0c66221d9a01506b114e18f3d3afe61da99bf224e71cf4051235e51ac registry.redhat.io/codeready-workspaces/devfileregistry-rhel8@sha256:fdcd72766757f08486a6e4dbf3a24bf084aefdb2e86971c440048aec0315d7e8 devfileregistry-rhel8@sha256:fdcd72766757f08486a6e4dbf3a24bf084aefdb2e86971c440048aec0315d7e8 registry.redhat.io/codeready-workspaces/idea-rhel8@sha256:eff6db1da4c9743ff77b91acf08378bce6a652826b3b252512e63f767de07785 idea-rhel8@sha256:eff6db1da4c9743ff77b91acf08378bce6a652826b3b252512e63f767de07785 registry.redhat.io/codeready-workspaces/jwtproxy-rhel8@sha256:6176e28c4c02f0a40f8192088ccb505ce5722258bcaab0addff9bafa310c1ca4 jwtproxy-rhel8@sha256:6176e28c4c02f0a40f8192088ccb505ce5722258bcaab0addff9bafa310c1ca4 registry.redhat.io/codeready-workspaces/machineexec-rhel8@sha256:dc0e082c9522158cb12345b1d184c3803d8a4a63a7189940e853e51557e43acf machineexec-rhel8@sha256:dc0e082c9522158cb12345b1d184c3803d8a4a63a7189940e853e51557e43acf registry.redhat.io/codeready-workspaces/plugin-java11-rhel8@sha256:315273182e1f4dc884365fc3330ada3937b40369f3faf7762847ec433c3ac537 plugin-java11-rhel8@sha256:315273182e1f4dc884365fc3330ada3937b40369f3faf7762847ec433c3ac537 registry.redhat.io/codeready-workspaces/plugin-java8-rhel8@sha256:8cb1e495825051b83cf903bb317e55823a6f57b3bad92e9407dc8fa59c24c0cc plugin-java8-rhel8@sha256:8cb1e495825051b83cf903bb317e55823a6f57b3bad92e9407dc8fa59c24c0cc registry.redhat.io/codeready-workspaces/plugin-kubernetes-rhel8@sha256:75fe8823dea867489b68169b764dc8b0b03290a456e9bfec5fe0cc413eec7355 plugin-kubernetes-rhel8@sha256:75fe8823dea867489b68169b764dc8b0b03290a456e9bfec5fe0cc413eec7355 registry.redhat.io/codeready-workspaces/plugin-openshift-rhel8@sha256:d7603582f7ace76283641809b0c61dbcb78621735e536b789428e5a910d35af3 plugin-openshift-rhel8@sha256:d7603582f7ace76283641809b0c61dbcb78621735e536b789428e5a910d35af3 registry.redhat.io/codeready-workspaces/pluginbroker-artifacts-rhel8@sha256:6d13003539fcbda201065eae2e66dc67fed007ba3ba41fb3b8ec841650c52bc2 pluginbroker-artifacts-rhel8@sha256:6d13003539fcbda201065eae2e66dc67fed007ba3ba41fb3b8ec841650c52bc2 registry.redhat.io/codeready-workspaces/pluginbroker-metadata-rhel8@sha256:de8ede01ce5d3b06ae8b1866bb482bb937f020f7dee5dfb20b041f02c1e63f68 pluginbroker-metadata-rhel8@sha256:de8ede01ce5d3b06ae8b1866bb482bb937f020f7dee5dfb20b041f02c1e63f68 registry.redhat.io/codeready-workspaces/pluginregistry-rhel8@sha256:cbb82d5bcea22d6d65644c2a4c88ce1e3a082e8a696217d6a104b67daa60384e pluginregistry-rhel8@sha256:cbb82d5bcea22d6d65644c2a4c88ce1e3a082e8a696217d6a104b67daa60384e registry.redhat.io/codeready-workspaces/server-rhel8@sha256:e1694549ca2af22a1d1780cc7d92bb0829a411f74377f825eab3e0fba7c020d9 
server-rhel8@sha256:e1694549ca2af22a1d1780cc7d92bb0829a411f74377f825eab3e0fba7c020d9 registry.redhat.io/codeready-workspaces/stacks-cpp-rhel8@sha256:c2f38140f52112b2a7688c2a179afcaa930ad6216925eb322cfd9634a71cfc13 stacks-cpp-rhel8@sha256:c2f38140f52112b2a7688c2a179afcaa930ad6216925eb322cfd9634a71cfc13 registry.redhat.io/codeready-workspaces/stacks-dotnet-rhel8@sha256:f48fe1caa5be1ae91140681bee159ca8b11dc687fa50fbf9dc5644f4852bf5c8 stacks-dotnet-rhel8@sha256:f48fe1caa5be1ae91140681bee159ca8b11dc687fa50fbf9dc5644f4852bf5c8 registry.redhat.io/codeready-workspaces/stacks-golang-rhel8@sha256:db76d04752973223e2c0de9401ebf06b84263e1bb6d29f1455daaff0cb39c1b3 stacks-golang-rhel8@sha256:db76d04752973223e2c0de9401ebf06b84263e1bb6d29f1455daaff0cb39c1b3 registry.redhat.io/codeready-workspaces/stacks-php-rhel8@sha256:d120c41ee8dd80fb960dd4c1657bede536d32f13f3c3ca84e986a830ec2ead3b stacks-php-rhel8@sha256:d120c41ee8dd80fb960dd4c1657bede536d32f13f3c3ca84e986a830ec2ead3b registry.redhat.io/codeready-workspaces/theia-endpoint-rhel8@sha256:5d26cf000924716d8d03969121a4c636e7fc8ef08aa21148eafa28a2c4aeaff7 theia-endpoint-rhel8@sha256:5d26cf000924716d8d03969121a4c636e7fc8ef08aa21148eafa28a2c4aeaff7 registry.redhat.io/codeready-workspaces/theia-rhel8@sha256:6000d00ef1029583642c01fec588f92addb95f16d56d0c23991a8f19314b0f06 theia-rhel8@sha256:6000d00ef1029583642c01fec588f92addb95f16d56d0c23991a8f19314b0f06 registry.redhat.io/codeready-workspaces/traefik-rhel8@sha256:70215465e2ad65a61d1b5401378532a3a10aa60afdda0702fb6061d89b8ba3be traefik-rhel8@sha256:70215465e2ad65a61d1b5401378532a3a10aa60afdda0702fb6061d89b8ba3be registry.redhat.io/devworkspace/devworkspace-rhel8-operator@sha256:3f96fb70c3f56dea3384ea31b9252a5c6aca8e0f33dc53be590f134912244078 devworkspacedevworkspace-rhel8-operator@sha256:3f96fb70c3f56dea3384ea31b9252a5c6aca8e0f33dc53be590f134912244078 registry.redhat.io/jboss-eap-7/eap-xp3-openjdk11-openshift-rhel8@sha256:bb3072afdbf31ddd1071fea37ed5308db3bf8a2478b5aa5aff8373e8042d6aeb eap-xp3-openjdk11-openshift-rhel8@sha256:bb3072afdbf31ddd1071fea37ed5308db3bf8a2478b5aa5aff8373e8042d6aeb registry.redhat.io/jboss-eap-7/eap74-openjdk8-openshift-rhel7@sha256:b4a113c4d4972d142a3c350e2006a2b297dc883f8ddb29a88db19c892358632d eap74-openjdk8-openshift-rhel7@sha256:b4a113c4d4972d142a3c350e2006a2b297dc883f8ddb29a88db19c892358632d registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:1dc542b5ab33368443f698305a90c617385b4e9b101acc4acc0aa7b4bf58a292 openshift4ose-kube-rbac-proxy@sha256:1dc542b5ab33368443f698305a90c617385b4e9b101acc4acc0aa7b4bf58a292 registry.redhat.io/openshift4/ose-oauth-proxy@sha256:83988048d5f585ca936442963e23b77520e1e4d8c3d5b8160e43ae834a24b720 openshift4ose-oauth-proxy@sha256:83988048d5f585ca936442963e23b77520e1e4d8c3d5b8160e43ae834a24b720 registry.redhat.io/rh-sso-7/sso75-openshift-rhel8@sha256:dd4ea229521fb58dda7e547ea6db993156f4c61aa8a00f2fd1375bb77168b6e6 sso75-openshift-rhel8@sha256:dd4ea229521fb58dda7e547ea6db993156f4c61aa8a00f2fd1375bb77168b6e6 registry.redhat.io/rhel8/postgresql-13@sha256:6032adb3eac903ee8aa61f296ca9aaa57f5709e5673504b609222e042823f195 postgresql-13@sha256:6032adb3eac903ee8aa61f296ca9aaa57f5709e5673504b609222e042823f195 registry.redhat.io/rhel8/postgresql-96@sha256:314747a4a64ac16c33ead6a34479dccf16b9a07abf440ea7eeef7cda4cd19e32 postgresql-96@sha256:314747a4a64ac16c33ead6a34479dccf16b9a07abf440ea7eeef7cda4cd19e32 registry.redhat.io/rhscl/mongodb-36-rhel7@sha256:9f799d356d7d2e442bde9d401b720600fd9059a3d8eefea6f3b2ffa721c0dc73 
mongodb-36-rhel7@sha256:9f799d356d7d2e442bde9d401b720600fd9059a3d8eefea6f3b2ffa721c0dc73 registry.redhat.io/ubi8/ubi-minimal@sha256:2e4bbb2be6e7aff711ddc93f0b07e49c93d41e4c2ffc8ea75f804ad6fe25564e ubi8ubi-minimal@sha256:2e4bbb2be6e7aff711ddc93f0b07e49c93d41e4c2ffc8ea75f804ad6fe25564e Verification steps Verify the images have the same digests: Additional resources To find the sources of the images list, see the values of the relatedImages attribute in the link: - CodeReady Workspaces Operator ClusterServiceVersion sources . 5.3.4. Upgrading CodeReady Workspaces using the CLI management tool in restricted environment This section describes how to upgrade Red Hat CodeReady Workspaces using the CLI management tool in restricted environment. Prerequisites An administrative account on an OpenShift instance. A running instance version 2.14 of Red Hat CodeReady Workspaces, installed using the CLI management tool on the same instance of OpenShift, with the crwctl --installer operator method, in the <openshift-workspaces> project. See Section 3.4, "Installing CodeReady Workspaces in a restricted environment" . Essential container images are available to the CodeReady Workspaces server running in the cluster. See Section 5.3.3, "Preparing an private registry" . The crwctl 2.15 management tool is available. See Section 3.3.1, "Installing the crwctl CLI management tool" . Procedure In all running workspaces in the CodeReady Workspaces 2.14 instance, save and push changes back to the Git repositories. Stop all workspaces in the CodeReady Workspaces 2.14 instance. Run the following command: <image-registry> : A hostname and a port of the container-image registry accessible in the restricted environment. <organization> : An organization of the container-image registry. See: Section 5.3.3, "Preparing an private registry" . Verification steps Navigate to the CodeReady Workspaces instance. The 2.15 version number is visible at the bottom of the page. Note For slow systems or internet connections, add the --k8spodwaittimeout=1800000 flag option to the crwctl server:update command to extend the Pod timeout period to 1800000 ms or longer. 5.4. Upgrading CodeReady Workspaces that uses project strategies other than 'per user' This section describes how to upgrade CodeReady Workspaces that uses project strategies other than 'per user'. CodeReady Workspaces intends to use Kubernetes secrets as a storage for all sensitive user data. One project per user simplifies the design of the workspaces. This is the reason why project strategies other than per user become deprecated. The deprecation process happens in two steps. In the First Step project strategies other than per user are allowed but not recommended. In the Second Step support for project strategies other than per user is going to be removed. Note No automated upgrade support exists between First Step and Second Step for the installations where project strategies other than per user are used without losing data. Prerequisites CodeReady Workspaces configured with the project strategies other than per user . Intention to use CodeReady Workspaces configured with the per user namespace strategies per user . 5.4.1. Upgrading CodeReady Workspaces and backing up user data Procedure Notify all CodeReady Workspaces users about the upcoming data wipe. Note To back up the data, you can commit workspace configuration to an SCM server and use factories to restore it later. Re-install CodeReady Workspaces with per user namespace strategy. 5.4.2. 
Upgrading CodeReady Workspaces and losing user data When CodeReady Workspaces is upgraded and user data is not backed up, workspace configuration and user preferences are preserved, but all runtime data is wiped out. Procedure Notify all CodeReady Workspaces users about the upcoming data wipe. Change project strategy to per user . Note Upgrading without backing up user data has a disadvantage. Original PVs with runtime data are preserved but will no longer be used. This may lead to wasted resources. Additional resources Section 4.2, "Configuring workspace target project" Chapter 3, Installing CodeReady Workspaces https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/end-user_guide/index#workspaces-overview.adoc https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/end-user_guide/index#importing-the-source-code-of-a-project-into-a-workspace.adoc 5.5. Rolling back a CodeReady Workspaces upgrade To restore CodeReady Workspaces to the pre-upgrade version, roll back the CodeReady Workspaces version upgrade as follows: Prerequisites Installed crwctl . Procedure Run the following command on a command line: Note CodeReady Workspaces Operator automatically creates a backup before every upgrade. | [
"crwctl server:update -n openshift-workspaces",
"git clone [email protected]:redhat-developer/codeready-workspaces.git cd codeready-workspaces git checkout crw-2.15-rhel-8",
"cd dependencies/che-devfile-registry ./build.sh --organization <my-org> --registry <my-registry> --tag <my-tag> --offline",
"git clone [email protected]:redhat-developer/codeready-workspaces.git cd codeready-workspaces git checkout crw-2.15-rhel-8",
"cd dependencies/che-plugin-registry ./build.sh --organization <my-org> --registry <my-registry> --tag <my-tag> --offline --skip-digest-generation",
"podman login --username <user> --password <password> <target-registry>",
"[registries.insecure] registries = [' <target-registry> ']",
"skopeo copy --all docker:// <source-image> docker:// <target-registry> / <target-organization> / <target-image>",
"skopeo inspect docker:// <source-image> skopeo inspect docker:// <target-registry> / <target-organization> / <target-image>",
"crwctl server:update --che-operator-image= <image-registry> / <organization> /crw-2-rhel8-operator:2.15 -n openshift-workspaces",
"crwctl server:restore --rollback"
]
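Copying every entry in Table 5.3 by hand is repetitive, so the skopeo copy step can be wrapped in a loop. The sketch below is hypothetical: the images.txt file is an assumption you would prepare yourself, with one line per image containing the <source-image> and <target-image> columns from the table separated by whitespace:

# mirror-images.sh (illustrative only)
while read -r source target; do
    skopeo copy --all "docker://${source}" "docker://<target-registry>/<target-organization>/${target}"
done < images.txt

After the loop finishes, verify a few digests with skopeo inspect as shown in the verification steps above.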
| https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/installation_guide/upgrading-codeready-workspaces_crw |
Chapter 13. Change requests in Business Central | Chapter 13. Change requests in Business Central If you have more than one branch in a Business Central project and you make a change in a branch that you want to merge to another branch, you can create a change request. Any user with permission to view the target branch, usually the master branch, can see the change request. 13.1. Creating change requests You can create a change request in a Business Central project after you have made a change in your project, for example after you have added or deleted an attribute in an asset. Prerequisites You have more than one branch of a Business Central project. You made a change in one branch that you want to merge to another branch. Procedure In Business Central, go to Menu Design Projects and select the space and project that contains the change that you want to merge. On the project page, select the branch that contains the change. Figure 13.1. Select a branch menu Do one of the following tasks to submit the change request: Click in the upper-right corner of the screen and select Submit Change Request . Click the Change Requests tab and then click Submit Change Request . The Submit Change Request window appears. Enter a summary and a description, select the target branch, and click Submit . The target branch is the branch where the change will be merged. After you click Submit , the change request window appears. 13.2. Working with change requests You can view change requests for any branch that you have access to. You must have administrator permissions to accept a change request. Prerequisites You have more than one branch of a Business Central project. Procedure In Business Central, go to Menu Design Projects and select a space and project. On the project page, verify that you are on the correct branch. Click the Change Requests tab. A list of pending change requests appears. To filter change requests, select Open , Closed , or All to the left of the Search box. To search for specific change requests, enter an ID or text in the Search box and click the magnifying glass. To view the change request details, click the summary link. The change request window has two tabs: Review the Overview tab for general information about the change request. Click the Changed Files tab and expand a file to review the proposed changes. Click a button in the top right corner. Click Squash and Merge to squash all commits into a single commit and merge the commit to the target branch. Click Merge to merge the changes into the target branch. Click Reject to reject the changes and leave the target branch unchanged. Click Close to close the change request without rejecting or accepting it. Note that only the user who submitted the change request can close it. Click Cancel to return to the project window without making any changes. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/change-requests-con_managing-projects
7.253. tcsh | 7.253. tcsh 7.253.1. RHBA-2013:0446 - tcsh bug fix update Updated tcsh packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. The tcsh packages provide an enhanced and compatible version of the C shell (csh) command language interpreter, which can be used as an interactive login shell, as well as a shell script command processor. Bug Fixes BZ#769157 Prior to this update, the tcsh command language interpreter could run out of memory because of random "sbrk()" failures in the internal "malloc()" function. As a consequence, tcsh could abort with a segmentation fault. This update uses "system malloc" instead and tcsh no longer aborts. BZ#814069 Prior to this update, aliases were inserted into the history buffer when saving the history in loops if the alias included a statement that did not work in the loop. This update no longer allows to save the history in loops. Now, only the first line of loops and the "if" statement are saved in the history. Aliases now work as expected. BZ# 821796 Prior to this update, casting was removed when calling a function in the history file locking patch. As a consequence, multibyte tests failed. This update reverts the status before the patch and tests no longer fail. BZ# 847102 Prior to this update, the tcsh logic did not handle file sourcing as expected. As a consequence, source commands failed when using a single-line "if" statement. This update modifies the underlying code to handle source commands as expected. BZ# 884937 Prior to this update, the SIGINT signal was not blocked when the tcsh command language interpreter waited for the child process to finish. As a consequence, tcsh could be aborted with the key combination Ctrl+c. This update blocks the SIGINT signal and tcsh is no longer aborted. All users of tcsh are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/tcsh |
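The advisory itself does not list commands, but on Red Hat Enterprise Linux 6 the updated packages would typically be pulled in with the package manager; the following is illustrative only:

yum update tcsh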
Chapter 8. Enabling accelerators | Chapter 8. Enabling accelerators Before you can use an accelerator in OpenShift AI, you must install the relevant software components. The installation process varies based on the accelerator type. Prerequisites You have logged in to your OpenShift cluster. You have the cluster-admin role in your OpenShift cluster. You have installed an accelerator and confirmed that it is detected in your environment. Procedure Follow the appropriate documentation to enable your accelerator: NVIDIA GPUs : See Enabling NVIDIA GPUs . Intel Gaudi AI accelerators : See Enabling Intel Gaudi AI accelerators . AMD GPUs : See Enabling AMD GPUs . After installing your accelerator, create an accelerator profile as described in Working with accelerator profiles . Verification From the Administrator perspective, go to the Operators Installed Operators page. Confirm that the following Operators appear: The Operator for your accelerator Node Feature Discovery (NFD) Kernel Module Management (KMM) The accelerator is detected correctly a few minutes after the Node Feature Discovery (NFD) Operator and the relevant accelerator Operator are fully installed. The OpenShift command line interface (CLI) displays the appropriate output for the GPU worker node. For example, here is output confirming that an NVIDIA GPU is detected (a hedged scheduling check based on this output is sketched after this chapter): | [
"Expected output when the accelerator is detected correctly oc describe node <node name> Capacity: cpu: 4 ephemeral-storage: 313981932Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 16076568Ki nvidia.com/gpu: 1 pods: 250 Allocatable: cpu: 3920m ephemeral-storage: 288292006229 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 12828440Ki nvidia.com/gpu: 1 pods: 250"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed/enabling-accelerators_install |
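A quick follow-on check, not part of the guide above, is to schedule a test pod that requests the detected accelerator resource. The sketch below is a minimal, hedged example: the pod name, container image, and command are illustrative assumptions, and nvidia.com/gpu is taken from the sample node output above; substitute the resource name reported for your accelerator.

apiVersion: v1
kind: Pod
metadata:
  name: accelerator-smoke-test   # hypothetical name, for illustration only
spec:
  restartPolicy: Never
  containers:
  - name: check
    # Assumed sample image; use any image that can query your accelerator
    image: nvcr.io/nvidia/cuda:12.4.0-base-ubi9
    command: ["nvidia-smi"]      # assumed check command for NVIDIA GPUs
    resources:
      limits:
        nvidia.com/gpu: 1        # matches the allocatable resource shown by the node description

If the pod completes successfully, the cluster is both detecting and allocating the accelerator; if it remains Pending, re-check the Operators and node labels described in the verification steps.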
Chapter 6. ConsolePlugin [console.openshift.io/v1] | Chapter 6. ConsolePlugin [console.openshift.io/v1] Description ConsolePlugin is an extension for customizing OpenShift web console by dynamically loading code from another service running on the cluster. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsolePluginSpec is the desired plugin configuration. 6.1.1. .spec Description ConsolePluginSpec is the desired plugin configuration. Type object Required backend displayName Property Type Description backend object backend holds the configuration of backend which is serving console's plugin . displayName string displayName is the display name of the plugin. The dispalyName should be between 1 and 128 characters. i18n object i18n is the configuration of plugin's localization resources. proxy array proxy is a list of proxies that describe various service type to which the plugin needs to connect to. proxy[] object ConsolePluginProxy holds information on various service types to which console's backend will proxy the plugin's requests. 6.1.2. .spec.backend Description backend holds the configuration of backend which is serving console's plugin . Type object Required type Property Type Description service object service is a Kubernetes Service that exposes the plugin using a deployment with an HTTP server. The Service must use HTTPS and Service serving certificate. The console backend will proxy the plugins assets from the Service using the service CA bundle. type string type is the backend type which servers the console's plugin. Currently only "Service" is supported. 6.1.3. .spec.backend.service Description service is a Kubernetes Service that exposes the plugin using a deployment with an HTTP server. The Service must use HTTPS and Service serving certificate. The console backend will proxy the plugins assets from the Service using the service CA bundle. Type object Required name namespace port Property Type Description basePath string basePath is the path to the plugin's assets. The primary asset it the manifest file called plugin-manifest.json , which is a JSON document that contains metadata about the plugin and the extensions. name string name of Service that is serving the plugin assets. namespace string namespace of Service that is serving the plugin assets. port integer port on which the Service that is serving the plugin is listening to. 6.1.4. .spec.i18n Description i18n is the configuration of plugin's localization resources. 
Type object Required loadType Property Type Description loadType string loadType indicates how the plugin's localization resource should be loaded. Valid values are Preload, Lazy and the empty string. When set to Preload, all localization resources are fetched when the plugin is loaded. When set to Lazy, localization resources are lazily loaded as and when they are required by the console. When omitted or set to the empty string, the behaviour is equivalent to Lazy type. 6.1.5. .spec.proxy Description proxy is a list of proxies that describe various service type to which the plugin needs to connect to. Type array 6.1.6. .spec.proxy[] Description ConsolePluginProxy holds information on various service types to which console's backend will proxy the plugin's requests. Type object Required alias endpoint Property Type Description alias string alias is a proxy name that identifies the plugin's proxy. An alias name should be unique per plugin. The console backend exposes following proxy endpoint: /api/proxy/plugin/<plugin-name>/<proxy-alias>/<request-path>?<optional-query-parameters> Request example path: /api/proxy/plugin/acm/search/pods?namespace=openshift-apiserver authorization string authorization provides information about authorization type, which the proxied request should contain caCertificate string caCertificate provides the cert authority certificate contents, in case the proxied Service is using custom service CA. By default, the service CA bundle provided by the service-ca operator is used. endpoint object endpoint provides information about endpoint to which the request is proxied to. 6.1.7. .spec.proxy[].endpoint Description endpoint provides information about endpoint to which the request is proxied to. Type object Required type Property Type Description service object service is an in-cluster Service that the plugin will connect to. The Service must use HTTPS. The console backend exposes an endpoint in order to proxy communication between the plugin and the Service. Note: service field is required for now, since currently only "Service" type is supported. type string type is the type of the console plugin's proxy. Currently only "Service" is supported. 6.1.8. .spec.proxy[].endpoint.service Description service is an in-cluster Service that the plugin will connect to. The Service must use HTTPS. The console backend exposes an endpoint in order to proxy communication between the plugin and the Service. Note: service field is required for now, since currently only "Service" type is supported. Type object Required name namespace port Property Type Description name string name of Service that the plugin needs to connect to. namespace string namespace of Service that the plugin needs to connect to port integer port on which the Service that the plugin needs to connect to is listening on. 6.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleplugins DELETE : delete collection of ConsolePlugin GET : list objects of kind ConsolePlugin POST : create a ConsolePlugin /apis/console.openshift.io/v1/consoleplugins/{name} DELETE : delete a ConsolePlugin GET : read the specified ConsolePlugin PATCH : partially update the specified ConsolePlugin PUT : replace the specified ConsolePlugin 6.2.1. /apis/console.openshift.io/v1/consoleplugins HTTP method DELETE Description delete collection of ConsolePlugin Table 6.1. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsolePlugin Table 6.2. HTTP responses HTTP code Reponse body 200 - OK ConsolePluginList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsolePlugin Table 6.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.4. Body parameters Parameter Type Description body ConsolePlugin schema Table 6.5. HTTP responses HTTP code Reponse body 200 - OK ConsolePlugin schema 201 - Created ConsolePlugin schema 202 - Accepted ConsolePlugin schema 401 - Unauthorized Empty 6.2.2. /apis/console.openshift.io/v1/consoleplugins/{name} Table 6.6. Global path parameters Parameter Type Description name string name of the ConsolePlugin HTTP method DELETE Description delete a ConsolePlugin Table 6.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsolePlugin Table 6.9. HTTP responses HTTP code Reponse body 200 - OK ConsolePlugin schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsolePlugin Table 6.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.11. HTTP responses HTTP code Reponse body 200 - OK ConsolePlugin schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsolePlugin Table 6.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.13. Body parameters Parameter Type Description body ConsolePlugin schema Table 6.14. HTTP responses HTTP code Reponse body 200 - OK ConsolePlugin schema 201 - Created ConsolePlugin schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/console_apis/consoleplugin-console-openshift-io-v1 |
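The tables above describe the ConsolePlugin schema field by field but do not show a complete manifest. The following sketch assembles the documented fields into one illustrative custom resource; the plugin name, Service details, and proxy alias are hypothetical placeholders rather than values taken from the reference.

apiVersion: console.openshift.io/v1
kind: ConsolePlugin
metadata:
  name: my-plugin                  # hypothetical plugin name
spec:
  displayName: My Console Plugin   # must be between 1 and 128 characters
  backend:
    type: Service                  # currently the only supported backend type
    service:                       # must serve the assets over HTTPS with a service serving certificate
      name: my-plugin-service
      namespace: my-plugin-namespace
      port: 9443
      basePath: /                  # location of plugin-manifest.json and other assets
  i18n:
    loadType: Preload              # Preload, Lazy, or empty (treated as Lazy)
  proxy:
  - alias: my-backend              # exposed as /api/proxy/plugin/my-plugin/my-backend/<request-path>
    endpoint:
      type: Service                # currently the only supported proxy type
      service:
        name: my-backend-service
        namespace: my-plugin-namespace
        port: 9443

Only the fields marked Required in the tables above (spec.backend, spec.displayName, and the Service name, namespace, and port) are mandatory; the i18n and proxy sections can be omitted entirely.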
Chapter 23. Additional resources | Chapter 23. Additional resources Designing business processes using BPMN models | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/additional_resources_2 |
4.4. Turning Off Local Transactions | 4.4. Turning Off Local Transactions In some cases, tools or frameworks layered on top of JBoss Data Virtualization call setAutoCommit(false) , commit() and rollback() even when all access is read-only and no transactions are necessary. In the scope of a local transaction, JBoss Data Virtualization will start and attempt to commit an XA transaction, possibly complicating configuration or causing performance degradation. In these cases, you can override the default JDBC behavior so that these methods perform no action regardless of the commands being executed. To turn off the use of local transactions, add the following property to the JDBC connection URL: Warning Turning off local transactions can be dangerous and can result in inconsistent reads or inconsistent data being written to the data stores. For safety, this mode should be used only if you are certain that the calling application does not need local transactions. | [
"disableLocalTxn=true"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/turning_off_local_transactions1 |
3.4. Virtual Disk Performance Options | 3.4. Virtual Disk Performance Options Several virtual disk-related options that can impact performance are available to your guest virtual machines during installation. The following image shows the virtual disk options available to your guests. The cache mode, IO mode, and IO tuning can be selected in the Virtual Disk section in virt-manager . Set these parameters in the fields under Performance options , as shown in the following image: Figure 3.8. Virtual Disk Performance Options Important When setting the virtual disk performance options in virt-manager , the virtual machine must be restarted for the settings to take effect. See Section 7.3, "Caching" and Section 7.4, "I/O Mode" for descriptions of these settings and instructions for editing these settings in the guest XML configuration. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-Virtualization_Tuning_Optimization_Guide-Virt_Manager-Virtual-Disk_Options
Chapter 2. Builds | Chapter 2. Builds 2.1. Understanding image builds 2.1.1. Builds A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process. OpenShift Container Platform uses Kubernetes by creating containers from build images and pushing them to a container image registry. Build objects share common characteristics including inputs for a build, the requirement to complete a build process, logging the build process, publishing resources from successful builds, and publishing the final status of the build. Builds take advantage of resource restrictions, specifying limitations on resources such as CPU usage, memory usage, and build or pod execution time. The OpenShift Container Platform build system provides extensible support for build strategies that are based on selectable types specified in the build API. There are three primary build strategies available: Docker build Source-to-image (S2I) build Custom build By default, docker builds and S2I builds are supported. The resulting object of a build depends on the builder used to create it. For docker and S2I builds, the resulting objects are runnable images. For custom builds, the resulting objects are whatever the builder image author has specified. Additionally, the pipeline build strategy can be used to implement sophisticated workflows: Continuous integration Continuous deployment 2.1.1.1. Docker build OpenShift Container Platform uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation . Tip If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation. 2.1.1.2. Source-to-image build Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on. 2.1.1.3. Custom build The custom build strategy allows developers to define a specific builder image responsible for the entire build process. Using your own builder image allows you to customize your build process. A custom builder image is a plain container image embedded with build process logic, for example for building RPMs or base images. Custom builds run with a high level of privilege and are not available to users by default. Only users who can be trusted with cluster administration permissions should be granted access to run custom builds. 2.1.1.4. Pipeline build Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. 
The build can be started, monitored, and managed by OpenShift Container Platform in the same way as any other build type. Pipeline workflows are defined in a jenkinsfile , either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration. 2.2. Understanding build configurations The following sections define the concept of a build, build configuration, and outline the primary build strategies available. 2.2.1. BuildConfigs A build configuration describes a single build definition and a set of triggers for when a new build is created. Build configurations are defined by a BuildConfig , which is a REST object that can be used in a POST to the API server to create a new instance. A build configuration, or BuildConfig , is characterized by a build strategy and one or more sources. The strategy determines the process, while the sources provide its input. Depending on how you choose to create your application using OpenShift Container Platform, a BuildConfig is typically generated automatically for you if you use the web console or CLI, and it can be edited at any time. Understanding the parts that make up a BuildConfig and their available options can help if you choose to manually change your configuration later. The following example BuildConfig results in a new build every time a container image tag or the source code changes: BuildConfig object definition kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: "ruby-sample-build" 1 spec: runPolicy: "Serial" 2 triggers: 3 - type: "GitHub" github: secret: "secret101" - type: "Generic" generic: secret: "secret101" - type: "ImageChange" source: 4 git: uri: "https://github.com/openshift/ruby-hello-world" strategy: 5 sourceStrategy: from: kind: "ImageStreamTag" name: "ruby-20-centos7:latest" output: 6 to: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" postCommit: 7 script: "bundle exec rake test" 1 This specification creates a new BuildConfig named ruby-sample-build . 2 The runPolicy field controls whether builds created from this build configuration can be run simultaneously. The default value is Serial , which means new builds run sequentially, not simultaneously. 3 You can specify a list of triggers, which cause a new build to be created. 4 The source section defines the source of the build. The source type determines the primary source of input, and can be either Git , to point to a code repository location, Dockerfile , to build from an inline Dockerfile, or Binary , to accept binary payloads. It is possible to have multiple sources at once. For more information about each source type, see "Creating build inputs". 5 The strategy section describes the build strategy used to execute the build. You can specify a Source , Docker , or Custom strategy here. This example uses the ruby-20-centos7 container image that Source-to-image (S2I) uses for the application build. 6 After the container image is successfully built, it is pushed into the repository described in the output section. 7 The postCommit section defines an optional build hook. 2.3. Creating build inputs Use the following sections for an overview of build inputs, instructions on how to use inputs to provide source content for builds to operate on, and how to use build environments and create secrets. 2.3.1. Build inputs A build input provides source content for builds to operate on. 
You can use the following build inputs to provide sources in OpenShift Container Platform, listed in order of precedence: Inline Dockerfile definitions Content extracted from existing images Git repositories Binary (Local) inputs Input secrets External artifacts You can combine multiple inputs in a single build. However, as the inline Dockerfile takes precedence, it can overwrite any other file named Dockerfile provided by another input. Binary (local) input and Git repositories are mutually exclusive inputs. You can use input secrets when you do not want certain resources or credentials used during a build to be available in the final application image produced by the build, or want to consume a value that is defined in a secret resource. External artifacts can be used to pull in additional files that are not available as one of the other build input types. When you run a build: A working directory is constructed and all input content is placed in the working directory. For example, the input Git repository is cloned into the working directory, and files specified from input images are copied into the working directory using the target path. The build process changes directories into the contextDir , if one is defined. The inline Dockerfile, if any, is written to the current directory. The content from the current directory is provided to the build process for reference by the Dockerfile, custom builder logic, or assemble script. This means any input content that resides outside the contextDir is ignored by the build. The following example of a source definition includes multiple input types and an explanation of how they are combined. For more details on how each input type is defined, see the specific sections for each input type. source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: "master" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: "app/dir" 3 dockerfile: "FROM centos:7\nRUN yum install -y httpd" 4 1 The repository to be cloned into the working directory for the build. 2 /usr/lib/somefile.jar from myinputimage is stored in <workingdir>/app/dir/injected/dir . 3 The working directory for the build becomes <original_workingdir>/app/dir . 4 A Dockerfile with this content is created in <original_workingdir>/app/dir , overwriting any existing file with that name. 2.3.2. Dockerfile source When you supply a dockerfile value, the content of this field is written to disk as a file named dockerfile . This is done after other input sources are processed, so if the input source repository contains a Dockerfile in the root directory, it is overwritten with this content. The source definition is part of the spec section in the BuildConfig : source: dockerfile: "FROM centos:7\nRUN yum install -y httpd" 1 1 The dockerfile field contains an inline Dockerfile that is built. Additional resources The typical use for this field is to provide a Dockerfile to a docker strategy build. 2.3.3. Image source You can add additional files to the build process with images. Input images are referenced in the same way the From and To image targets are defined. This means both container images and image stream tags can be referenced. In conjunction with the image, you must provide one or more path pairs to indicate the path of the files or directories to copy the image and the destination to place them in the build context. 
The source path can be any absolute path within the image specified. The destination must be a relative directory path. At build time, the image is loaded and the indicated files and directories are copied into the context directory of the build process. This is the same directory into which the source repository content is cloned. If the source path ends in /. then the content of the directory is copied, but the directory itself is not created at the destination. Image inputs are specified in the source definition of the BuildConfig : source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: "master" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar 1 An array of one or more input images and files. 2 A reference to the image containing the files to be copied. 3 An array of source/destination paths. 4 The directory relative to the build root where the build process can access the file. 5 The location of the file to be copied out of the referenced image. 6 An optional secret provided if credentials are needed to access the input image. Note If your cluster uses an ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. Optionally, if an input image requires a pull secret, you can link the pull secret to the service account used by the build. By default, builds use the builder service account. The pull secret is automatically added to the build if the secret contains a credential that matches the repository hosting the input image. To link a pull secret to the service account used by the build, run: USD oc secrets link builder dockerhub Note This feature is not supported for builds using the custom strategy. 2.3.4. Git source When specified, source code is fetched from the supplied location. If you supply an inline Dockerfile, it overwrites the Dockerfile in the contextDir of the Git repository. The source definition is part of the spec section in the BuildConfig : source: git: 1 uri: "https://github.com/openshift/ruby-hello-world" ref: "master" contextDir: "app/dir" 2 dockerfile: "FROM openshift/ruby-22-centos7\nUSER example" 3 1 The git field contains the Uniform Resource Identifier (URI) to the remote Git repository of the source code. You must specify the value of the ref field to check out a specific Git reference. A valid ref can be a SHA1 tag or a branch name. The default value of the ref field is master . 2 The contextDir field allows you to override the default location inside the source code repository where the build looks for the application source code. If your application exists inside a sub-directory, you can override the default location (the root folder) using this field. 3 If the optional dockerfile field is provided, it should be a string containing a Dockerfile that overwrites any Dockerfile that may exist in the source repository. If the ref field denotes a pull request, the system uses a git fetch operation and then checkout FETCH_HEAD . When no ref value is provided, OpenShift Container Platform performs a shallow clone ( --depth=1 ). 
In this case, only the files associated with the most recent commit on the default branch (typically master ) are downloaded. This results in repositories downloading faster, but without the full commit history. To perform a full git clone of the default branch of a specified repository, set ref to the name of the default branch (for example main ). Warning Git clone operations that go through a proxy that is performing man in the middle (MITM) TLS hijacking or reencrypting of the proxied connection do not work. 2.3.4.1. Using a proxy If your Git repository can only be accessed using a proxy, you can define the proxy to use in the source section of the build configuration. You can configure both an HTTP and HTTPS proxy to use. Both fields are optional. Domains for which no proxying should be performed can also be specified in the NoProxy field. Note Your source URI must use the HTTP or HTTPS protocol for this to work. source: git: uri: "https://github.com/openshift/ruby-hello-world" ref: "master" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com Note For Pipeline strategy builds, given the current restrictions with the Git plugin for Jenkins, any Git operations through the Git plugin do not leverage the HTTP or HTTPS proxy defined in the BuildConfig . The Git plugin only uses the proxy configured in the Jenkins UI at the Plugin Manager panel. This proxy is then used for all git interactions within Jenkins, across all jobs. Additional resources You can find instructions on how to configure proxies through the Jenkins UI at JenkinsBehindProxy . 2.3.4.2. Source Clone Secrets Builder pods require access to any Git repositories defined as source for a build. Source clone secrets are used to provide the builder pod with access it would not normally have access to, such as private repositories or repositories with self-signed or untrusted SSL certificates. The following source clone secret configurations are supported: .gitconfig File Basic Authentication SSH Key Authentication Trusted Certificate Authorities Note You can also use combinations of these configurations to meet your specific needs. 2.3.4.2.1. Automatically adding a source clone secret to a build configuration When a BuildConfig is created, OpenShift Container Platform can automatically populate its source clone secret reference. This behavior allows the resulting builds to automatically use the credentials stored in the referenced secret to authenticate to a remote Git repository, without requiring further configuration. To use this functionality, a secret containing the Git repository credentials must exist in the namespace in which the BuildConfig is later created. This secrets must include one or more annotations prefixed with build.openshift.io/source-secret-match-uri- . The value of each of these annotations is a Uniform Resource Identifier (URI) pattern, which is defined as follows. When a BuildConfig is created without a source clone secret reference and its Git source URI matches a URI pattern in a secret annotation, OpenShift Container Platform automatically inserts a reference to that secret in the BuildConfig . Prerequisites A URI pattern must consist of: A valid scheme: *:// , git:// , http:// , https:// or ssh:// A host: *` or a valid hostname or IP address optionally preceded by *. A path: /* or / followed by any characters optionally including * characters In all of the above, a * character is interpreted as a wildcard. 
Important URI patterns must match Git source URIs which are conformant to RFC3986 . Do not include a username (or password) component in a URI pattern. For example, if you use ssh://[email protected]:7999/ATLASSIAN jira.git for a git repository URL, the source secret must be specified as ssh://bitbucket.atlassian.com:7999/* (and not ssh://[email protected]:7999/* ). USD oc annotate secret mysecret \ 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*' Procedure If multiple secrets match the Git URI of a particular BuildConfig , OpenShift Container Platform selects the secret with the longest match. This allows for basic overriding, as in the following example. The following fragment shows two partial source clone secrets, the first matching any server in the domain mycorp.com accessed by HTTPS, and the second overriding access to servers mydev1.mycorp.com and mydev2.mycorp.com : kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: ... --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data: ... Add a build.openshift.io/source-secret-match-uri- annotation to a pre-existing secret using: USD oc annotate secret mysecret \ 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*' 2.3.4.2.2. Manually adding a source clone secret Source clone secrets can be added manually to a build configuration by adding a sourceSecret field to the source section inside the BuildConfig and setting it to the name of the secret that you created. In this example, it is the basicsecret . apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: output: to: kind: "ImageStreamTag" name: "sample-image:latest" source: git: uri: "https://github.com/user/app.git" sourceSecret: name: "basicsecret" strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "python-33-centos7:latest" Procedure You can also use the oc set build-secret command to set the source clone secret on an existing build configuration. To set the source clone secret on an existing build configuration, enter the following command: USD oc set build-secret --source bc/sample-build basicsecret 2.3.4.2.3. Creating a secret from a .gitconfig file If the cloning of your application is dependent on a .gitconfig file, then you can create a secret that contains it. Add it to the builder service account and then your BuildConfig . Procedure To create a secret from a .gitconfig file: USD oc create secret generic <secret_name> --from-file=<path/to/.gitconfig> Note SSL verification can be turned off if sslVerify=false is set for the http section in your .gitconfig file: [http] sslVerify=false 2.3.4.2.4. Creating a secret from a .gitconfig file for secured Git If your Git server is secured with two-way SSL and user name with password, you must add the certificate files to your source build and add references to the certificate files in the .gitconfig file. Prerequisites You must have Git credentials. Procedure Add the certificate files to your source build and add references to the certificate files in the .gitconfig file. Add the client.crt , cacert.crt , and client.key files to the /var/run/secrets/openshift.io/source/ folder in the application source code. 
In the .gitconfig file for the server, add the [http] section shown in the following example: # cat .gitconfig Example output [user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt Create the secret: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ 1 --from-literal=password=<password> \ 2 --from-file=.gitconfig=.gitconfig \ --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt \ --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt \ --from-file=client.key=/var/run/secrets/openshift.io/source/client.key 1 The user's Git user name. 2 The password for this user. Important To avoid having to enter your password again, be sure to specify the source-to-image (S2I) image in your builds. However, if you cannot clone the repository, you must still specify your user name and password to promote the build. Additional resources /var/run/secrets/openshift.io/source/ folder in the application source code. 2.3.4.2.5. Creating a secret from source code basic authentication Basic authentication requires either a combination of --username and --password , or a token to authenticate against the software configuration management (SCM) server. Prerequisites User name and password to access the private repository. Procedure Create the secret first before using the --username and --password to access the private repository: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --type=kubernetes.io/basic-auth Create a basic authentication secret with a token: USD oc create secret generic <secret_name> \ --from-literal=password=<token> \ --type=kubernetes.io/basic-auth 2.3.4.2.6. Creating a secret from source code SSH key authentication SSH key based authentication requires a private SSH key. The repository keys are usually located in the USDHOME/.ssh/ directory, and are named id_dsa.pub , id_ecdsa.pub , id_ed25519.pub , or id_rsa.pub by default. Procedure Generate SSH key credentials: USD ssh-keygen -t ed25519 -C "[email protected]" Note Creating a passphrase for the SSH key prevents OpenShift Container Platform from building. When prompted for a passphrase, leave it blank. Two files are created: the public key and a corresponding private key (one of id_dsa , id_ecdsa , id_ed25519 , or id_rsa ). With both of these in place, consult your source control management (SCM) system's manual on how to upload the public key. The private key is used to access your private repository. Before using the SSH key to access the private repository, create the secret: USD oc create secret generic <secret_name> \ --from-file=ssh-privatekey=<path/to/ssh/private/key> \ --from-file=<path/to/known_hosts> \ 1 --type=kubernetes.io/ssh-auth 1 Optional: Adding this field enables strict server host key check. Warning Skipping the known_hosts file while creating the secret makes the build vulnerable to a potential man-in-the-middle (MITM) attack. Note Ensure that the known_hosts file includes an entry for the host of your source code. 2.3.4.2.7. Creating a secret from source code trusted certificate authorities The set of Transport Layer Security (TLS) certificate authorities (CA) that are trusted during a Git clone operation are built into the OpenShift Container Platform infrastructure images. 
If your Git server uses a self-signed certificate or one signed by an authority not trusted by the image, you can create a secret that contains the certificate or disable TLS verification. If you create a secret for the CA certificate, OpenShift Container Platform uses it to access your Git server during the Git clone operation. Using this method is significantly more secure than disabling Git SSL verification, which accepts any TLS certificate that is presented. Procedure Create a secret with a CA certificate file. If your CA uses Intermediate Certificate Authorities, combine the certificates for all CAs in a ca.crt file. Enter the following command: USD cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt Create the secret: USD oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1 1 You must use the key name ca.crt . 2.3.4.2.8. Source secret combinations You can combine the different methods for creating source clone secrets for your specific needs. 2.3.4.2.8.1. Creating a SSH-based authentication secret with a .gitconfig file You can combine the different methods for creating source clone secrets for your specific needs, such as a SSH-based authentication secret with a .gitconfig file. Prerequisites SSH authentication .gitconfig file Procedure To create a SSH-based authentication secret with a .gitconfig file, run: USD oc create secret generic <secret_name> \ --from-file=ssh-privatekey=<path/to/ssh/private/key> \ --from-file=<path/to/.gitconfig> \ --type=kubernetes.io/ssh-auth 2.3.4.2.8.2. Creating a secret that combines a .gitconfig file and CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a .gitconfig file and certificate authority (CA) certificate. Prerequisites .gitconfig file CA certificate Procedure To create a secret that combines a .gitconfig file and CA certificate, run: USD oc create secret generic <secret_name> \ --from-file=ca.crt=<path/to/certificate> \ --from-file=<path/to/.gitconfig> 2.3.4.2.8.3. Creating a basic authentication secret with a CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication and certificate authority (CA) certificate. Prerequisites Basic authentication credentials CA certificate Procedure Create a basic authentication secret with a CA certificate, run: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=ca-cert=</path/to/file> \ --type=kubernetes.io/basic-auth 2.3.4.2.8.4. Creating a basic authentication secret with a .gitconfig file You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication and .gitconfig file. Prerequisites Basic authentication credentials .gitconfig file Procedure To create a basic authentication secret with a .gitconfig file, run: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=</path/to/.gitconfig> \ --type=kubernetes.io/basic-auth 2.3.4.2.8.5. Creating a basic authentication secret with a .gitconfig file and CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication, .gitconfig file, and certificate authority (CA) certificate. 
Prerequisites Basic authentication credentials .gitconfig file CA certificate Procedure To create a basic authentication secret with a .gitconfig file and CA certificate, run: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=</path/to/.gitconfig> \ --from-file=ca-cert=</path/to/file> \ --type=kubernetes.io/basic-auth 2.3.5. Binary (local) source Streaming content from a local file system to the builder is called a Binary type build. The corresponding value of BuildConfig.spec.source.type is Binary for these builds. This source type is unique in that it is leveraged solely based on your use of the oc start-build . Note Binary type builds require content to be streamed from the local file system, so automatically triggering a binary type build, like an image change trigger, is not possible. This is because the binary files cannot be provided. Similarly, you cannot launch binary type builds from the web console. To utilize binary builds, invoke oc start-build with one of these options: --from-file : The contents of the file you specify are sent as a binary stream to the builder. You can also specify a URL to a file. Then, the builder stores the data in a file with the same name at the top of the build context. --from-dir and --from-repo : The contents are archived and sent as a binary stream to the builder. Then, the builder extracts the contents of the archive within the build context directory. With --from-dir , you can also specify a URL to an archive, which is extracted. --from-archive : The archive you specify is sent to the builder, where it is extracted within the build context directory. This option behaves the same as --from-dir ; an archive is created on your host first, whenever the argument to these options is a directory. In each of the previously listed cases: If your BuildConfig already has a Binary source type defined, it is effectively ignored and replaced by what the client sends. If your BuildConfig has a Git source type defined, it is dynamically disabled, since Binary and Git are mutually exclusive, and the data in the binary stream provided to the builder takes precedence. Instead of a file name, you can pass a URL with HTTP or HTTPS schema to --from-file and --from-archive . When using --from-file with a URL, the name of the file in the builder image is determined by the Content-Disposition header sent by the web server, or the last component of the URL path if the header is not present. No form of authentication is supported and it is not possible to use custom TLS certificate or disable certificate validation. When using oc new-build --binary=true , the command ensures that the restrictions associated with binary builds are enforced. The resulting BuildConfig has a source type of Binary , meaning that the only valid way to run a build for this BuildConfig is to use oc start-build with one of the --from options to provide the requisite binary data. The Dockerfile and contextDir source options have special meaning with binary builds. Dockerfile can be used with any binary build source. If Dockerfile is used and the binary stream is an archive, its contents serve as a replacement Dockerfile to any Dockerfile in the archive. If Dockerfile is used with the --from-file argument, and the file argument is named Dockerfile, the value from Dockerfile replaces the value from the binary stream. 
In the case of the binary stream encapsulating extracted archive content, the value of the contextDir field is interpreted as a subdirectory within the archive, and, if valid, the builder changes into that subdirectory before executing the build. 2.3.6. Input secrets and config maps Important To prevent the contents of input secrets and config maps from appearing in build output container images, use build volumes in your Docker build and source-to-image build strategies. In some scenarios, build operations require credentials or other configuration data to access dependent resources, but it is undesirable for that information to be placed in source control. You can define input secrets and input config maps for this purpose. For example, when building a Java application with Maven, you can set up a private mirror of Maven Central or JCenter that is accessed by private keys. To download libraries from that private mirror, you have to supply the following: A settings.xml file configured with the mirror's URL and connection settings. A private key referenced in the settings file, such as ~/.ssh/id_rsa . For security reasons, you do not want to expose your credentials in the application image. This example describes a Java application, but you can use the same approach for adding SSL certificates into the /etc/ssl/certs directory, API keys or tokens, license files, and more. 2.3.6.1. What is a secret? The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. YAML Secret Object Definition apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5 1 Indicates the structure of the secret's key names and values. 2 The allowable format for the keys in the data field must meet the guidelines in the DNS_SUBDOMAIN value in the Kubernetes identifiers glossary. 3 The value associated with keys in the data map must be base64 encoded. 4 Entries in the stringData map are converted to base64 and the entry are then moved to the data map automatically. This field is write-only. The value is only be returned by the data field. 5 The value associated with keys in the stringData map is made up of plain text strings. 2.3.6.1.1. Properties of secrets Key properties include: Secret data can be referenced independently from its definition. Secret data volumes are backed by temporary file-storage facilities (tmpfs) and never come to rest on a node. Secret data can be shared within a namespace. 2.3.6.1.2. Types of Secrets The value in the type field indicates the structure of the secret's key names and values. The type can be used to enforce the presence of user names and keys in the secret object. If you do not want validation, use the opaque type, which is the default. Specify one of the following types to trigger minimal server-side validation to ensure the presence of specific key names in the secret data: kubernetes.io/service-account-token . Uses a service account token. kubernetes.io/dockercfg . Uses the .dockercfg file for required Docker credentials. kubernetes.io/dockerconfigjson . 
Uses the .docker/config.json file for required Docker credentials. kubernetes.io/basic-auth . Use with basic authentication. kubernetes.io/ssh-auth . Use with SSH key authentication. kubernetes.io/tls . Use with TLS certificate authorities. Specify type= Opaque if you do not want validation, which means the secret does not claim to conform to any convention for key names or values. An opaque secret, allows for unstructured key:value pairs that can contain arbitrary values. Note You can specify other arbitrary types, such as example.com/my-secret-type . These types are not enforced server-side, but indicate that the creator of the secret intended to conform to the key/value requirements of that type. 2.3.6.1.3. Updates to secrets When you modify the value of a secret, the value used by an already running pod does not dynamically change. To change a secret, you must delete the original pod and create a new pod, in some cases with an identical PodSpec . Updating a secret follows the same workflow as deploying a new container image. You can use the kubectl rolling-update command. The resourceVersion value in a secret is not specified when it is referenced. Therefore, if a secret is updated at the same time as pods are starting, the version of the secret that is used for the pod is not defined. Note Currently, it is not possible to check the resource version of a secret object that was used when a pod was created. It is planned that pods report this information, so that a controller could restart ones using an old resourceVersion . In the interim, do not update the data of existing secrets, but create new ones with distinct names. 2.3.6.2. Creating secrets You must create a secret before creating the pods that depend on that secret. When creating secrets: Create a secret object with secret data. Update the pod service account to allow the reference to the secret. Create a pod, which consumes the secret as an environment variable or as a file using a secret volume. Procedure Use the create command to create a secret object from a JSON or YAML file: USD oc create -f <filename> For example, you can create a secret from your local .docker/config.json file: USD oc create secret generic dockerhub \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson This command generates a JSON specification of the secret named dockerhub and creates the object. YAML Opaque Secret Object Definition apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password> 1 Specifies an opaque secret. Docker Configuration JSON File Secret Object Definition apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a docker configuration JSON file. 2 The output of a base64-encoded the docker configuration JSON file 2.3.6.3. Using secrets After creating secrets, you can create a pod to reference your secret, get logs, and delete the pod. 
Procedure Create the pod to reference your secret: USD oc create -f <your_yaml_file>.yaml Get the logs: USD oc logs secret-example-pod Delete the pod: USD oc delete pod secret-example-pod Additional resources Example YAML files with secret data: YAML Secret That Will Create Four Files apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB 1 File contains decoded values. 2 File contains decoded values. 3 File contains the provided string. 4 File contains the provided data. YAML of a pod populating files in a volume with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "cat /etc/secret-volume/*" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never YAML of a pod populating environment variables with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "export" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never YAML of a Build Config Populating Environment Variables with Secret Data apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username 2.3.6.4. Adding input secrets and config maps To provide credentials and other configuration data to a build without placing them in source control, you can define input secrets and input config maps. In some scenarios, build operations require credentials or other configuration data to access dependent resources. To make that information available without placing it in source control, you can define input secrets and input config maps. Procedure To add an input secret, config maps, or both to an existing BuildConfig object: Create the ConfigMap object, if it does not exist: USD oc create configmap settings-mvn \ --from-file=settings.xml=<path/to/settings.xml> This creates a new config map named settings-mvn , which contains the plain text content of the settings.xml file. Tip You can alternatively apply the following YAML to create the config map: apiVersion: core/v1 kind: ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... # Insert maven settings here </settings> Create the Secret object, if it does not exist: USD oc create secret generic secret-mvn \ --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> --type=kubernetes.io/ssh-auth This creates a new secret named secret-mvn , which contains the base64 encoded content of the id_rsa private key. 
Tip You can alternatively apply the following YAML to create the input secret: apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded Add the config map and secret to the source section in the existing BuildConfig object: source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn To include the secret and config map in a new BuildConfig object, run the following command: USD oc new-build \ openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git \ --context-dir helloworld --build-secret "secret-mvn" \ --build-config-map "settings-mvn" During the build, the settings.xml and id_rsa files are copied into the directory where the source code is located. In OpenShift Container Platform S2I builder images, this is the image working directory, which is set using the WORKDIR instruction in the Dockerfile . If you want to specify another directory, add a destinationDir to the definition: source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: ".m2" secrets: - secret: name: secret-mvn destinationDir: ".ssh" You can also specify the destination directory when creating a new BuildConfig object: USD oc new-build \ openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git \ --context-dir helloworld --build-secret "secret-mvn:.ssh" \ --build-config-map "settings-mvn:.m2" In both cases, the settings.xml file is added to the ./.m2 directory of the build environment, and the id_rsa key is added to the ./.ssh directory. 2.3.6.5. Source-to-image strategy When using a Source strategy, all defined input secrets are copied to their respective destinationDir . If you left destinationDir empty, then the secrets are placed in the working directory of the builder image. The same rule is used when a destinationDir is a relative path. The secrets are placed in the paths that are relative to the working directory of the image. The final directory in the destinationDir path is created if it does not exist in the builder image. All preceding directories in the destinationDir must exist, or an error will occur. Note Input secrets are added as world-writable, have 0666 permissions, and are truncated to size zero after executing the assemble script. This means that the secret files exist in the resulting image, but they are empty for security reasons. Input config maps are not truncated after the assemble script completes. 2.3.6.6. Docker strategy When using a docker strategy, you can add all defined input secrets into your container image using the ADD and COPY instructions in your Dockerfile. If you do not specify the destinationDir for a secret, then the files are copied into the same directory in which the Dockerfile is located. If you specify a relative path as destinationDir , then the secrets are copied into that directory, relative to your Dockerfile location. This makes the secret files available to the Docker build operation as part of the context directory used during the build. Example of a Dockerfile referencing secret and config map data Important Users normally remove their input secrets from the final application image so that the secrets are not present in the container running from that image. However, the secrets still exist in the image itself in the layer where they were added. 
This removal is part of the Dockerfile itself. To prevent the contents of input secrets and config maps from appearing in the build output container images and avoid this removal process altogether, use build volumes in your Docker build strategy instead. 2.3.6.7. Custom strategy When using a Custom strategy, all the defined input secrets and config maps are available in the builder container in the /var/run/secrets/openshift.io/build directory. The custom build image must use these secrets and config maps appropriately. With the Custom strategy, you can define secrets as described in Custom strategy options. There is no technical difference between existing strategy secrets and the input secrets. However, your builder image can distinguish between them and use them differently, based on your build use case. The input secrets are always mounted into the /var/run/secrets/openshift.io/build directory, or your builder can parse the USDBUILD environment variable, which includes the full build object. Important If a pull secret for the registry exists in both the namespace and the node, builds default to using the pull secret in the namespace. 2.3.7. External artifacts It is not recommended to store binary files in a source repository. Therefore, you must define a build which pulls additional files, such as Java .jar dependencies, during the build process. How this is done depends on the build strategy you are using. For a Source build strategy, you must put appropriate shell commands into the assemble script: .s2i/bin/assemble File #!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar .s2i/bin/run File #!/bin/sh exec java -jar app.jar For a Docker build strategy, you must modify the Dockerfile and invoke shell commands with the RUN instruction : Excerpt of Dockerfile FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ "java", "-jar", "app.jar" ] In practice, you may want to use an environment variable for the file location so that the specific file to be downloaded can be customized using an environment variable defined on the BuildConfig , rather than updating the Dockerfile or assemble script. You can choose between different methods of defining environment variables: Using the .s2i/environment file] (only for a Source build strategy) Setting in BuildConfig Providing explicitly using oc start-build --env (only for builds that are triggered manually) 2.3.8. Using docker credentials for private registries You can supply builds with a . docker/config.json file with valid credentials for private container registries. This allows you to push the output image into a private container image registry or pull a builder image from the private container image registry that requires authentication. You can supply credentials for multiple repositories within the same registry, each with credentials specific to that registry path. Note For the OpenShift Container Platform container image registry, this is not required because secrets are generated automatically for you by OpenShift Container Platform. 
The .docker/config.json file is found in your home directory by default and has the following format: auths: index.docker.io/v1/: 1 auth: "YWRfbGzhcGU6R2labnRib21ifTE=" 2 email: "[email protected]" 3 docker.io/my-namespace/my-user/my-image: 4 auth: "GzhYWRGU6R2fbclabnRgbkSp="" email: "[email protected]" docker.io/my-namespace: 5 auth: "GzhYWRGU6R2deesfrRgbkSp="" email: "[email protected]" 1 URL of the registry. 2 Encrypted password. 3 Email address for the login. 4 URL and credentials for a specific image in a namespace. 5 URL and credentials for a registry namespace. You can define multiple container image registries or define multiple repositories in the same registry. Alternatively, you can also add authentication entries to this file by running the docker login command. The file will be created if it does not exist. Kubernetes provides Secret objects, which can be used to store configuration and passwords. Prerequisites You must have a .docker/config.json file. Procedure Create the secret from your local .docker/config.json file: USD oc create secret generic dockerhub \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson This generates a JSON specification of the secret named dockerhub and creates the object. Add a pushSecret field into the output section of the BuildConfig and set it to the name of the secret that you created, which in the example is dockerhub : spec: output: to: kind: "DockerImage" name: "private.registry.com/org/private-image:latest" pushSecret: name: "dockerhub" You can use the oc set build-secret command to set the push secret on the build configuration: USD oc set build-secret --push bc/sample-build dockerhub You can also link the push secret to the service account used by the build instead of specifying the pushSecret field. By default, builds use the builder service account. The push secret is automatically added to the build if the secret contains a credential that matches the repository hosting the build's output image. USD oc secrets link builder dockerhub Pull the builder container image from a private container image registry by specifying the pullSecret field, which is part of the build strategy definition: strategy: sourceStrategy: from: kind: "DockerImage" name: "docker.io/user/private_repository" pullSecret: name: "dockerhub" You can use the oc set build-secret command to set the pull secret on the build configuration: USD oc set build-secret --pull bc/sample-build dockerhub Note This example uses pullSecret in a Source build, but it is also applicable in Docker and Custom builds. You can also link the pull secret to the service account used by the build instead of specifying the pullSecret field. By default, builds use the builder service account. The pull secret is automatically added to the build if the secret contains a credential that matches the repository hosting the build's input image. To link the pull secret to the service account used by the build instead of specifying the pullSecret field, run: USD oc secrets link builder dockerhub Note You must specify a from image in the BuildConfig spec to take advantage of this feature. Docker strategy builds generated by oc new-build or oc new-app may not do this in some situations. 2.3.9. Build environments As with pod environment variables, build environment variables can be defined in terms of references to other resources or variables using the Downward API. There are some exceptions, which are noted. 
You can also manage environment variables defined in the BuildConfig with the oc set env command. Note Referencing container resources using valueFrom in build environment variables is not supported as the references are resolved before the container is created. 2.3.9.1. Using build fields as environment variables You can inject information about the build object by setting the fieldPath environment variable source to the JsonPath of the field from which you are interested in obtaining the value. Note Jenkins Pipeline strategy does not support valueFrom syntax for environment variables. Procedure Set the fieldPath environment variable source to the JsonPath of the field from which you are interested in obtaining the value: env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name 2.3.9.2. Using secrets as environment variables You can make key values from secrets available as environment variables using the valueFrom syntax. Important This method shows the secrets as plain text in the output of the build pod console. To avoid this, use input secrets and config maps instead. Procedure To use a secret as an environment variable, set the valueFrom syntax: apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret Additional resources Input secrets and config maps 2.3.10. Service serving certificate secrets Service serving certificate secrets are intended to support complex middleware applications that need out-of-the-box certificates. It has the same settings as the server certificates generated by the administrator tooling for nodes and masters. Procedure To secure communication to your service, have the cluster generate a signed serving certificate/key pair into a secret in your namespace. Set the service.beta.openshift.io/serving-cert-secret-name annotation on your service with the value set to the name you want to use for your secret. Then, your PodSpec can mount that secret. When it is available, your pod runs. The certificate is good for the internal service DNS name, <service.name>.<service.namespace>.svc . The certificate and key are in PEM format, stored in tls.crt and tls.key respectively. The certificate/key pair is automatically replaced when it gets close to expiration. View the expiration date in the service.beta.openshift.io/expiry annotation on the secret, which is in RFC3339 format. Note In most cases, the service DNS name <service.name>.<service.namespace>.svc is not externally routable. The primary use of <service.name>.<service.namespace>.svc is for intracluster or intraservice communication, and with re-encrypt routes. Other pods can trust cluster-created certificates, which are only signed for internal DNS names, by using the certificate authority (CA) bundle in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file that is automatically mounted in their pod. The signature algorithm for this feature is x509.SHA256WithRSA . To manually rotate, delete the generated secret. A new certificate is created. 2.3.11. Secrets restrictions To use a secret, a pod needs to reference the secret. A secret can be used with a pod in three ways: To populate environment variables for containers. As files in a volume mounted on one or more of its containers. By kubelet when pulling images for the pod. Volume type secrets write data into the container as a file using the volume mechanism. 
imagePullSecrets use service accounts for the automatic injection of the secret into all pods in a namespaces. When a template contains a secret definition, the only way for the template to use the provided secret is to ensure that the secret volume sources are validated and that the specified object reference actually points to an object of type Secret . Therefore, a secret needs to be created before any pods that depend on it. The most effective way to ensure this is to have it get injected automatically through the use of a service account. Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Individual secrets are limited to 1MB in size. This is to discourage the creation of large secrets that would exhaust apiserver and kubelet memory. However, creation of a number of smaller secrets could also exhaust memory. 2.4. Managing build output Use the following sections for an overview of and instructions for managing build output. 2.4.1. Build output Builds that use the docker or source-to-image (S2I) strategy result in the creation of a new container image. The image is then pushed to the container image registry specified in the output section of the Build specification. If the output kind is ImageStreamTag , then the image will be pushed to the integrated OpenShift image registry and tagged in the specified imagestream. If the output is of type DockerImage , then the name of the output reference will be used as a docker push specification. The specification may contain a registry or will default to DockerHub if no registry is specified. If the output section of the build specification is empty, then the image will not be pushed at the end of the build. Output to an ImageStreamTag spec: output: to: kind: "ImageStreamTag" name: "sample-image:latest" Output to a docker Push Specification spec: output: to: kind: "DockerImage" name: "my-registry.mycompany.com:5000/myimages/myimage:tag" 2.4.2. Output image environment variables docker and source-to-image (S2I) strategy builds set the following environment variables on output images: Variable Description OPENSHIFT_BUILD_NAME Name of the build OPENSHIFT_BUILD_NAMESPACE Namespace of the build OPENSHIFT_BUILD_SOURCE The source URL of the build OPENSHIFT_BUILD_REFERENCE The Git reference used in the build OPENSHIFT_BUILD_COMMIT Source commit used in the build Additionally, any user-defined environment variable, for example those configured with S2I] or docker strategy options, will also be part of the output image environment variable list. 2.4.3. Output image labels docker and source-to-image (S2I)` builds set the following labels on output images: Label Description io.openshift.build.commit.author Author of the source commit used in the build io.openshift.build.commit.date Date of the source commit used in the build io.openshift.build.commit.id Hash of the source commit used in the build io.openshift.build.commit.message Message of the source commit used in the build io.openshift.build.commit.ref Branch or reference specified in the source io.openshift.build.source-location Source URL for the build You can also use the BuildConfig.spec.output.imageLabels field to specify a list of custom labels that will be applied to each image built from the build configuration. Custom Labels to be Applied to Built Images spec: output: to: kind: "ImageStreamTag" name: "my-image:latest" imageLabels: - name: "vendor" value: "MyCompany" - name: "authoritative-source-url" value: "registry.mycompany.com" 2.5. 
Using build strategies The following sections define the primary supported build strategies, and how to use them. 2.5.1. Docker build OpenShift Container Platform uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation . Tip If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation. 2.5.1.1. Replacing Dockerfile FROM image You can replace the FROM instruction of the Dockerfile with the from of the BuildConfig object. If the Dockerfile uses multi-stage builds, the image in the last FROM instruction will be replaced. Procedure To replace the FROM instruction of the Dockerfile with the from of the BuildConfig . strategy: dockerStrategy: from: kind: "ImageStreamTag" name: "debian:latest" 2.5.1.2. Using Dockerfile path By default, docker builds use a Dockerfile located at the root of the context specified in the BuildConfig.spec.source.contextDir field. The dockerfilePath field allows the build to use a different path to locate your Dockerfile, relative to the BuildConfig.spec.source.contextDir field. It can be a different file name than the default Dockerfile, such as MyDockerfile , or a path to a Dockerfile in a subdirectory, such as dockerfiles/app1/Dockerfile . Procedure To use the dockerfilePath field for the build to use a different path to locate your Dockerfile, set: strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile 2.5.1.3. Using docker environment variables To make environment variables available to the docker build process and resulting image, you can add environment variables to the dockerStrategy definition of the build configuration. The environment variables defined there are inserted as a single ENV Dockerfile instruction right after the FROM instruction, so that it can be referenced later on within the Dockerfile. Procedure The variables are defined during build and stay in the output image, therefore they will be present in any container that runs that image as well. For example, defining a custom HTTP proxy to be used during build and runtime: dockerStrategy: ... env: - name: "HTTP_PROXY" value: "http://myproxy.net:5187/" You can also manage environment variables defined in the build configuration with the oc set env command. 2.5.1.4. Adding docker build arguments You can set docker build arguments using the buildArgs array. The build arguments are passed to docker when a build is started. Tip See Understand how ARG and FROM interact in the Dockerfile reference documentation. Procedure To set docker build arguments, add entries to the buildArgs array, which is located in the dockerStrategy definition of the BuildConfig object. For example: dockerStrategy: ... buildArgs: - name: "foo" value: "bar" Note Only the name and value fields are supported. Any settings on the valueFrom field are ignored. 2.5.1.5. Squashing layers with docker builds Docker builds normally create a layer representing each instruction in a Dockerfile. Setting the imageOptimizationPolicy to SkipLayers merges all instructions into a single layer on top of the base image. Procedure Set the imageOptimizationPolicy to SkipLayers : strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers 2.5.1.6. Using build volumes You can mount build volumes to give running builds access to information that you don't want to persist in the output container image. 
Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs , whose data can persist in the output container image. The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts . Prerequisites You have added an input secret, config map, or both to a BuildConfig object . Procedure In the dockerStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example: spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value 1 5 9 Required. A unique name. 2 6 10 Required. The absolute path of the mount point. It must not contain .. or : and doesn't collide with the destination path generated by the builder. The /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images. 3 7 11 Required. The type of source, ConfigMap , Secret , or CSI . 4 8 Required. The name of the source. 12 Required. The driver that provides the ephemeral CSI volume. 13 Optional. If true, this instructs the driver to provide a read-only volume. 14 Optional. The volume attributes of the ephemeral CSI volume. Consult the CSI driver's documentation for supported attribute keys and values. Note The Shared Resource CSI Driver is supported as a Technology Preview feature. 2.5.2. Source-to-image build Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on. 2.5.2.1. Performing source-to-image incremental builds Source-to-image (S2I) can perform incremental builds, which means it reuses artifacts from previously-built images. Procedure To create an incremental build, create a build configuration with the following modification to the strategy definition: strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "incremental-image:latest" 1 incremental: true 2 1 Specify an image that supports incremental builds. Consult the documentation of the builder image to determine if it supports this behavior. 2 This flag controls whether an incremental build is attempted. If the builder image does not support incremental builds, the build will still succeed, but you will get a log message stating the incremental build was not successful because of a missing save-artifacts script. Additional resources See S2I Requirements for information on how to create a builder image supporting incremental builds. 2.5.2.2. Overriding source-to-image builder image scripts You can override the assemble , run , and save-artifacts source-to-image (S2I) scripts provided by the builder image.
Procedure To override the assemble , run , and save-artifacts S2I scripts provided by the builder image, either: Provide an assemble , run , or save-artifacts script in the .s2i/bin directory of your application source repository. Provide a URL of a directory containing the scripts as part of the strategy definition. For example: strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "builder-image:latest" scripts: "http://somehost.com/scripts_directory" 1 1 This path will have run , assemble , and save-artifacts appended to it. If any or all scripts are found they will be used in place of the same named scripts provided in the image. Note Files located at the scripts URL take precedence over files located in .s2i/bin of the source repository. 2.5.2.3. Source-to-image environment variables There are two ways to make environment variables available to the source build process and resulting image. Environment files and BuildConfig environment values. Variables provided will be present during the build process and in the output image. 2.5.2.3.1. Using source-to-image environment files Source build enables you to set environment values, one per line, inside your application, by specifying them in a .s2i/environment file in the source repository. The environment variables specified in this file are present during the build process and in the output image. If you provide a .s2i/environment file in your source repository, source-to-image (S2I) reads this file during the build. This allows customization of the build behavior as the assemble script may use these variables. Procedure For example, to disable assets compilation for your Rails application during the build: Add DISABLE_ASSET_COMPILATION=true in the .s2i/environment file. In addition to builds, the specified environment variables are also available in the running application itself. For example, to cause the Rails application to start in development mode instead of production : Add RAILS_ENV=development to the .s2i/environment file. The complete list of supported environment variables is available in the using images section for each image. 2.5.2.3.2. Using source-to-image build configuration environment You can add environment variables to the sourceStrategy definition of the build configuration. The environment variables defined there are visible during the assemble script execution and will be defined in the output image, making them also available to the run script and application code. Procedure For example, to disable assets compilation for your Rails application: sourceStrategy: ... env: - name: "DISABLE_ASSET_COMPILATION" value: "true" Additional resources The build environment section provides more advanced instructions. You can also manage environment variables defined in the build configuration with the oc set env command. 2.5.2.4. Ignoring source-to-image source files Source-to-image (S2I) supports a .s2iignore file, which contains a list of file patterns that should be ignored. Files in the build working directory, as provided by the various input sources, that match a pattern found in the .s2iignore file will not be made available to the assemble script. 2.5.2.5. Creating images from source code with source-to-image Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. The main advantage of using S2I for building reproducible container images is the ease of use for developers. 
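Returning briefly to the .s2iignore file covered above, the following is a hypothetical example; the patterns are purely illustrative, and the exact pattern syntax that is honored should be confirmed against the S2I version in use: .s2iignore File
*.md
docs/*
tests/*
*.tmp
With such a file at the root of the source repository, files in the build working directory that match these patterns are not made available to the assemble script.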
As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance, the build process and S2I scripts. 2.5.2.5.1. Understanding the source-to-image build process The build process consists of the following three fundamental elements, which are combined into a final container image: Sources Source-to-image (S2I) scripts Builder image S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah. 2.5.2.5.2. How to write source-to-image scripts You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options providing assemble / run / save-artifacts scripts. All of these locations are checked on each build in the following order: A script specified in the build configuration. A script found in the application source .s2i/bin directory. A script found at the default image URL with the io.openshift.s2i.scripts-url label. Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms: image:///path_to_scripts_dir : absolute path inside the image to a directory where the S2I scripts are located. file:///path_to_scripts_dir : relative or absolute path to a directory on the host where the S2I scripts are located. http(s)://path_to_scripts_dir : URL to a directory where the S2I scripts are located. Table 2.1. S2I scripts Script Description assemble The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. The workflow for this script is: Optional: Restore build artifacts. If you want to support incremental builds, make sure to define save-artifacts as well. Place the application source in the desired location. Build the application artifacts. Install the artifacts into locations appropriate for them to run. run The run script executes your application. This script is required. save-artifacts The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. For example: For Ruby, gems installed by Bundler. For Java, .m2 contents. These dependencies are gathered into a tar file and streamed to the standard output. usage The usage script allows you to inform the user how to properly use your image. This script is optional. test/run The test/run script allows you to create a process to check if the image is working correctly. This script is optional. The proposed flow of that process is: Build the image. Run the image to verify the usage script. Run s2i build to verify the assemble script. Optional: Run s2i build again to verify the save-artifacts and assemble scripts save and restore artifacts functionality. Run the image to verify the test application is working. Note The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository. Example S2I scripts The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory. assemble script: #!/bin/bash # restore build artifacts if [ "USD(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then mv /tmp/s2i/artifacts/* USDHOME/. 
fi # move the application source mv /tmp/s2i/src USDHOME/src # build application artifacts pushd USD{HOME} make all # install the artifacts make install popd run script: #!/bin/bash # run the application /opt/application/run.sh save-artifacts script: #!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd usage script: #!/bin/bash # inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF Additional resources S2I Image Creation Tutorial 2.5.2.6. Using build volumes You can mount build volumes to give running builds access to information that you don't want to persist in the output container image. Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs , whose data can persist in the output container image. The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts . Prerequisites You have added an input secret, config map, or both to a BuildConfig object . Procedure In the sourceStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example: spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value 1 5 9 Required. A unique name. 2 6 10 Required. The absolute path of the mount point. It must not contain .. or : and doesn't collide with the destination path generated by the builder. The /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images. 3 7 11 Required. The type of source, ConfigMap , Secret , or CSI . 4 8 Required. The name of the source. 12 Required. The driver that provides the ephemeral CSI volume. 13 Optional. If true, this instructs the driver to provide a read-only volume. 14 Optional. The volume attributes of the ephemeral CSI volume. Consult the CSI driver's documentation for supported attribute keys and values. Note The Shared Resource CSI Driver is supported as a Technology Preview feature. 2.5.3. Custom build The custom build strategy allows developers to define a specific builder image responsible for the entire build process. Using your own builder image allows you to customize your build process. A custom builder image is a plain container image embedded with build process logic, for example for building RPMs or base images. Custom builds run with a high level of privilege and are not available to users by default. Only users who can be trusted with cluster administration permissions should be granted access to run custom builds. 2.5.3.1. Using FROM image for custom builds You can use the customStrategy.from section to indicate the image to use for the custom build Procedure Set the customStrategy.from section: strategy: customStrategy: from: kind: "DockerImage" name: "openshift/sti-image-builder" 2.5.3.2. 
Using secrets in custom builds In addition to secrets for source and images that can be added to all build types, custom strategies allow adding an arbitrary list of secrets to the builder pod. Procedure To mount each secret at a specific location, edit the secretSource and mountPath fields of the strategy YAML file: strategy: customStrategy: secrets: - secretSource: 1 name: "secret1" mountPath: "/tmp/secret1" 2 - secretSource: name: "secret2" mountPath: "/tmp/secret2" 1 secretSource is a reference to a secret in the same namespace as the build. 2 mountPath is the path inside the custom builder where the secret should be mounted. 2.5.3.3. Using environment variables for custom builds To make environment variables available to the custom build process, you can add environment variables to the customStrategy definition of the build configuration. The environment variables defined there are passed to the pod that runs the custom build. Procedure Define a custom HTTP proxy to be used during build: customStrategy: ... env: - name: "HTTP_PROXY" value: "http://myproxy.net:5187/" To manage environment variables defined in the build configuration, enter the following command: USD oc set env <enter_variables> 2.5.3.4. Using custom builder images OpenShift Container Platform's custom build strategy enables you to define a specific builder image responsible for the entire build process. When you need a build to produce individual artifacts such as packages, JARs, WARs, installable ZIPs, or base images, use a custom builder image using the custom build strategy. A custom builder image is a plain container image embedded with build process logic, which is used for building artifacts such as RPMs or base container images. Additionally, the custom builder allows implementing any extended build process, such as a CI/CD flow that runs unit or integration tests. 2.5.3.4.1. Custom builder image Upon invocation, a custom builder image receives the following environment variables with the information needed to proceed with the build: Table 2.2. Custom Builder Environment Variables Variable Name Description BUILD The entire serialized JSON of the Build object definition. If you must use a specific API version for serialization, you can set the buildAPIVersion parameter in the custom strategy specification of the build configuration. SOURCE_REPOSITORY The URL of a Git repository with source to be built. SOURCE_URI Uses the same value as SOURCE_REPOSITORY . Either can be used. SOURCE_CONTEXT_DIR Specifies the subdirectory of the Git repository to be used when building. Only present if defined. SOURCE_REF The Git reference to be built. ORIGIN_VERSION The version of the OpenShift Container Platform master that created this build object. OUTPUT_REGISTRY The container image registry to push the image to. OUTPUT_IMAGE The container image tag name for the image being built. PUSH_DOCKERCFG_PATH The path to the container registry credentials for running a podman push operation. 2.5.3.4.2. Custom builder workflow Although custom builder image authors have flexibility in defining the build process, your builder image must adhere to the following required steps necessary for running a build inside of OpenShift Container Platform: The Build object definition contains all the necessary information about input parameters for the build. Run the build process. If your build produces an image, push it to the output location of the build if it is defined. Other output locations can be passed with environment variables. 2.5.4. 
Pipeline build Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. The build can be started, monitored, and managed by OpenShift Container Platform in the same way as any other build type. Pipeline workflows are defined in a jenkinsfile , either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration. 2.5.4.1. Understanding OpenShift Container Platform pipelines Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. Pipelines give you control over building, deploying, and promoting your applications on OpenShift Container Platform. Using a combination of the Jenkins Pipeline build strategy, jenkinsfiles , and the OpenShift Container Platform Domain Specific Language (DSL) provided by the Jenkins Client Plugin, you can create advanced build, test, deploy, and promote pipelines for any scenario. OpenShift Container Platform Jenkins Sync Plugin The OpenShift Container Platform Jenkins Sync Plugin keeps the build configuration and build objects in sync with Jenkins jobs and builds, and provides the following: Dynamic job and run creation in Jenkins. Dynamic creation of agent pod templates from image streams, image stream tags, or config maps. Injection of environment variables. Pipeline visualization in the OpenShift Container Platform web console. Integration with the Jenkins Git plugin, which passes commit information from OpenShift Container Platform builds to the Jenkins Git plugin. Synchronization of secrets into Jenkins credential entries. OpenShift Container Platform Jenkins Client Plugin The OpenShift Container Platform Jenkins Client Plugin is a Jenkins plugin which aims to provide a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with an OpenShift Container Platform API Server. The plugin uses the OpenShift Container Platform command line tool, oc , which must be available on the nodes executing the script. The Jenkins Client Plugin must be installed on your Jenkins master so the OpenShift Container Platform DSL will be available to use within the jenkinsfile for your application. This plugin is installed and enabled by default when using the OpenShift Container Platform Jenkins image. For OpenShift Container Platform Pipelines within your project, you must use the Jenkins Pipeline Build Strategy. This strategy defaults to using a jenkinsfile at the root of your source repository, but also provides the following configuration options: An inline jenkinsfile field within your build configuration. A jenkinsfilePath field within your build configuration that references the location of the jenkinsfile to use relative to the source contextDir .
Note The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . 2.5.4.2. Providing the Jenkins file for pipeline builds Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The jenkinsfile uses the standard groovy language syntax to allow fine grained control over the configuration, build, and deployment of your application. You can supply the jenkinsfile in one of the following ways: A file located within your source code repository. Embedded as part of your build configuration using the jenkinsfile field. When using the first option, the jenkinsfile must be included in your applications source code repository at one of the following locations: A file named jenkinsfile at the root of your repository. A file named jenkinsfile at the root of the source contextDir of your repository. A file name specified via the jenkinsfilePath field of the JenkinsPipelineStrategy section of your BuildConfig, which is relative to the source contextDir if supplied, otherwise it defaults to the root of the repository. The jenkinsfile is run on the Jenkins agent pod, which must have the OpenShift Container Platform client binaries available if you intend to use the OpenShift Container Platform DSL. Procedure To provide the Jenkins file, you can either: Embed the Jenkins file in the build configuration. Include in the build configuration a reference to the Git repository that contains the Jenkins file. Embedded Definition kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') } Reference to Git Repository kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: source: git: uri: "https://github.com/openshift/ruby-hello-world" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1 1 The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . 2.5.4.3. Using environment variables for pipeline builds Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. To make environment variables available to the Pipeline build process, you can add environment variables to the jenkinsPipelineStrategy definition of the build configuration. Once defined, the environment variables will be set as parameters for any Jenkins job associated with the build configuration. 
Procedure To define environment variables to be used during build, edit the YAML file: jenkinsPipelineStrategy: ... env: - name: "FOO" value: "BAR" You can also manage environment variables defined in the build configuration with the oc set env command. 2.5.4.3.1. Mapping between BuildConfig environment variables and Jenkins job parameters When a Jenkins job is created or updated based on changes to a Pipeline strategy build configuration, any environment variables in the build configuration are mapped to Jenkins job parameters definitions, where the default values for the Jenkins job parameters definitions are the current values of the associated environment variables. After the Jenkins job's initial creation, you can still add additional parameters to the job from the Jenkins console. The parameter names differ from the names of the environment variables in the build configuration. The parameters are honored when builds are started for those Jenkins jobs. How you start builds for the Jenkins job dictates how the parameters are set. If you start with oc start-build , the values of the environment variables in the build configuration are the parameters set for the corresponding job instance. Any changes you make to the parameters' default values from the Jenkins console are ignored. The build configuration values take precedence. If you start with oc start-build -e , the values for the environment variables specified in the -e option take precedence. If you specify an environment variable not listed in the build configuration, they will be added as a Jenkins job parameter definitions. Any changes you make from the Jenkins console to the parameters corresponding to the environment variables are ignored. The build configuration and what you specify with oc start-build -e takes precedence. If you start the Jenkins job with the Jenkins console, then you can control the setting of the parameters with the Jenkins console as part of starting a build for the job. Note It is recommended that you specify in the build configuration all possible environment variables to be associated with job parameters. Doing so reduces disk I/O and improves performance during Jenkins processing. 2.5.4.4. Pipeline build tutorial Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. This example demonstrates how to create an OpenShift Container Platform Pipeline that will build, deploy, and verify a Node.js/MongoDB application using the nodejs-mongodb.json template. Procedure Create the Jenkins master: USD oc project <project_name> Select the project that you want to use or create a new project with oc new-project <project_name> . USD oc new-app jenkins-ephemeral 1 If you want to use persistent storage, use jenkins-persistent instead. Create a file named nodejs-sample-pipeline.yaml with the following content: Note This creates a BuildConfig object that employs the Jenkins pipeline strategy to build, deploy, and scale the Node.js/MongoDB example application. 
kind: "BuildConfig" apiVersion: "v1" metadata: name: "nodejs-sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline After you create a BuildConfig object with a jenkinsPipelineStrategy , tell the pipeline what to do by using an inline jenkinsfile : Note This example does not set up a Git repository for the application. The following jenkinsfile content is written in Groovy using the OpenShift Container Platform DSL. For this example, include inline content in the BuildConfig object using the YAML Literal Style, though including a jenkinsfile in your source repository is the preferred method. def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo "Using project: USD{openshift.project()}" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector("all", [ template : templateName ]).delete() 5 if (openshift.selector("secrets", templateName).exists()) { 6 openshift.selector("secrets", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector("bc", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == "Complete") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector("dc", templateName).rollout() timeout(5) { 9 openshift.selector("dc", templateName).related('pods').untilEach(1) { return (it.object().status.phase == "Running") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag("USD{templateName}:latest", "USD{templateName}-staging:latest") 10 } } } } } } } 1 Path of the template to use. 1 2 Name of the template that will be created. 3 Spin up a node.js agent pod on which to run this build. 4 Set a timeout of 20 minutes for this pipeline. 5 Delete everything with this template label. 6 Delete any secrets with this template label. 7 Create a new application from the templatePath . 8 Wait up to five minutes for the build to complete. 9 Wait up to five minutes for the deployment to complete. 10 If everything else succeeded, tag the USD {templateName}:latest image as USD {templateName}-staging:latest . A pipeline build configuration for the staging environment can watch for the USD {templateName}-staging:latest image to change and then deploy it to the staging environment. Note The example was written using the declarative pipeline style, but the older scripted pipeline style is also supported. 
Create the Pipeline BuildConfig in your OpenShift Container Platform cluster: USD oc create -f nodejs-sample-pipeline.yaml If you do not want to create your own file, you can use the sample from the Origin repository by running: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml Start the Pipeline: USD oc start-build nodejs-sample-pipeline Note Alternatively, you can start your pipeline with the OpenShift Container Platform web console by navigating to the Builds Pipeline section and clicking Start Pipeline , or by visiting the Jenkins Console, navigating to the Pipeline that you created, and clicking Build Now . Once the pipeline is started, you should see the following actions performed within your project: A job instance is created on the Jenkins server. An agent pod is launched, if your pipeline requires one. The pipeline runs on the agent pod, or the master if no agent is required. Any previously created resources with the template=nodejs-mongodb-example label will be deleted. A new application, and all of its associated resources, will be created from the nodejs-mongodb-example template. A build will be started using the nodejs-mongodb-example BuildConfig . The pipeline will wait until the build has completed to trigger the next stage. A deployment will be started using the nodejs-mongodb-example deployment configuration. The pipeline will wait until the deployment has completed to trigger the next stage. If the build and deploy are successful, the nodejs-mongodb-example:latest image will be tagged as nodejs-mongodb-example-staging:latest . The agent pod is deleted, if one was required for the pipeline. Note The best way to visualize the pipeline execution is by viewing it in the OpenShift Container Platform web console. You can view your pipelines by logging in to the web console and navigating to Builds Pipelines. 2.5.5. Adding secrets with web console You can add a secret to your build configuration so that it can access a private repository. Procedure To add a secret to your build configuration so that it can access a private repository from the OpenShift Container Platform web console: Create a new OpenShift Container Platform project. Create a secret that contains credentials for accessing a private source code repository. Create a build configuration. On the build configuration editor page or in the create app from builder image page of the web console, set the Source Secret . Click Save . 2.5.6. Enabling pulling and pushing You can enable pulling from a private registry by setting the pull secret, and pushing to a private registry by setting the push secret, in the build configuration. Procedure To enable pulling from a private registry: Set the pull secret in the build configuration. To enable pushing: Set the push secret in the build configuration. 2.6. Custom image builds with Buildah With OpenShift Container Platform 4.10, a docker socket will not be present on the host nodes. This means the mount docker socket option of a custom build is not guaranteed to provide an accessible docker socket for use within a custom build image. If you require this capability in order to build and push images, add the Buildah tool to your custom build image and use it to build and push the image within your custom build logic. The following is an example of how to run custom builds with Buildah.
Note Using the custom build strategy requires permissions that normal users do not have by default because it allows the user to execute arbitrary code inside a privileged container running on the cluster. This level of access can be used to compromise the cluster and therefore should be granted only to users who are trusted with administrative privileges on the cluster. 2.6.1. Prerequisites Review how to grant custom build permissions . 2.6.2. Creating custom build artifacts You must create the image you want to use as your custom build image. Procedure Starting with an empty directory, create a file named Dockerfile with the following content: FROM registry.redhat.io/rhel8/buildah # In this example, `/tmp/build` contains the inputs that build when this # custom builder image is run. Normally the custom builder image fetches # this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh # /usr/bin/build.sh contains the actual custom build logic that will be run when # this custom builder image is run. ENTRYPOINT ["/usr/bin/build.sh"] In the same directory, create a file named dockerfile.sample . This file is included in the custom build image and defines the image that is produced by the custom build: FROM registry.access.redhat.com/ubi8/ubi RUN touch /tmp/build In the same directory, create a file named build.sh . This file contains the logic that is run when the custom build runs: #!/bin/sh # Note that in this case the build inputs are part of the custom builder image, but normally this # is retrieved from an external source. cd /tmp/input # OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom # build framework TAG="USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}" # performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . # buildah requires a slight modification to the push secret provided by the service # account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo "{ \"auths\": " ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo "}") > /tmp/.dockercfg # push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG} 2.6.3. Build custom builder image You can use OpenShift Container Platform to build and push custom builder images to use in a custom strategy. Prerequisites Define all the inputs that will go into creating your new custom builder image. Procedure Define a BuildConfig object that will build your custom builder image: USD oc new-build --binary --strategy=docker --name custom-builder-image From the directory in which you created your custom build image, run the build: USD oc start-build custom-builder-image --from-dir . -F After the build completes, your new custom builder image is available in your project in an image stream tag that is named custom-builder-image:latest . 2.6.4. Use custom builder image You can define a BuildConfig object that uses the custom strategy in conjunction with your custom builder image to execute your custom build logic. Prerequisites Define all the required inputs for new custom builder image. Build your custom builder image. Procedure Create a file named buildconfig.yaml . 
This file defines the BuildConfig object that is created in your project and executed: kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest 1 Specify your project name. Create the BuildConfig : USD oc create -f buildconfig.yaml Create a file named imagestream.yaml . This file defines the image stream to which the build will push the image: kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {} Create the imagestream: USD oc create -f imagestream.yaml Run your custom build: USD oc start-build sample-custom-build -F When the build runs, it launches a pod running the custom builder image that was built earlier. The pod runs the build.sh logic that is defined as the entrypoint for the custom builder image. The build.sh logic invokes Buildah to build the dockerfile.sample that was embedded in the custom builder image, and then uses Buildah to push the new image to the sample-custom image stream . 2.7. Performing and configuring basic builds The following sections provide instructions for basic build operations, including starting and canceling builds, editing BuildConfigs , deleting BuildConfigs , viewing build details, and accessing build logs. 2.7.1. Starting a build You can manually start a new build from an existing build configuration in your current project. Procedure To manually start a build, enter the following command: USD oc start-build <buildconfig_name> 2.7.1.1. Re-running a build You can manually re-run a build using the --from-build flag. Procedure To manually re-run a build, enter the following command: USD oc start-build --from-build=<build_name> 2.7.1.2. Streaming build logs You can specify the --follow flag to stream the build's logs in stdout . Procedure To manually stream a build's logs in stdout , enter the following command: USD oc start-build <buildconfig_name> --follow 2.7.1.3. Setting environment variables when starting a build You can specify the --env flag to set any desired environment variable for the build. Procedure To specify a desired environment variable, enter the following command: USD oc start-build <buildconfig_name> --env=<key>=<value> 2.7.1.4. Starting a build with source Rather than relying on a Git source pull or a Dockerfile for a build, you can also start a build by directly pushing your source, which could be the contents of a Git or SVN working directory, a set of pre-built binary artifacts you want to deploy, or a single file. This can be done by specifying one of the following options for the start-build command: Option Description --from-dir=<directory> Specifies a directory that will be archived and used as a binary input for the build. --from-file=<file> Specifies a single file that will be the only file in the build source. The file is placed in the root of an empty directory with the same file name as the original file provided. --from-repo=<local_source_repo> Specifies a path to a local repository to use as the binary input for a build. Add the --commit option to control which branch, tag, or commit is used for the build. When passing any of these options directly to the build, the contents are streamed to the build and override the current build source settings. 
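For example, assuming a build configuration named hello-world (the same name used in the procedure that follows) and illustrative local paths, a binary build can be started from a directory or from a single file as follows:
oc start-build hello-world --from-dir=./binaries
oc start-build hello-world --from-file=./app.jar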
Note Builds triggered from binary input will not preserve the source on the server, so rebuilds triggered by base image changes will use the source specified in the build configuration. Procedure Start a build from a source using the following command to send the contents of a local Git repository as an archive from the tag v2 : USD oc start-build hello-world --from-repo=../hello-world --commit=v2 2.7.2. Canceling a build You can cancel a build using the web console, or with the following CLI command. Procedure To manually cancel a build, enter the following command: USD oc cancel-build <build_name> 2.7.2.1. Canceling multiple builds You can cancel multiple builds with the following CLI command. Procedure To manually cancel multiple builds, enter the following command: USD oc cancel-build <build1_name> <build2_name> <build3_name> 2.7.2.2. Canceling all builds You can cancel all builds from the build configuration with the following CLI command. Procedure To cancel all builds, enter the following command: USD oc cancel-build bc/<buildconfig_name> 2.7.2.3. Canceling all builds in a given state You can cancel all builds in a given state, such as new or pending , while ignoring the builds in other states. Procedure To cancel all builds in a given state, enter the following command: USD oc cancel-build bc/<buildconfig_name> --state=<state> 2.7.3. Editing a BuildConfig To edit your build configurations, you use the Edit BuildConfig option in the Builds view of the Developer perspective. You can use either of the following views to edit a BuildConfig : The Form view enables you to edit your BuildConfig using the standard form fields and checkboxes. The YAML view enables you to edit your BuildConfig with full control over the operations. You can switch between the Form view and YAML view without losing any data. The data in the Form view is transferred to the YAML view and vice versa. Procedure In the Builds view of the Developer perspective, click the menu to see the Edit BuildConfig option. Click Edit BuildConfig to see the Form view option. In the Git section, enter the Git repository URL for the codebase you want to use to create an application. The URL is then validated. Optional: Click Show Advanced Git Options to add details such as: Git Reference to specify a branch, tag, or commit that contains code you want to use to build the application. Context Dir to specify the subdirectory that contains code you want to use to build the application. Source Secret to create a Secret Name with credentials for pulling your source code from a private repository. In the Build from section, select the option that you would like to build from. You can use the following options: Image Stream tag references an image for a given image stream and tag. Enter the project, image stream, and tag of the location you would like to build from and push to. Image Stream image references an image for a given image stream and image name. Enter the image stream image you would like to build from. Also enter the project, image stream, and tag to push to. Docker image : The Docker image is referenced through a Docker image repository. You will also need to enter the project, image stream, and tag to refer to where you would like to push to. Optional: In the Environment Variables section, add the environment variables associated with the project by using the Name and Value fields. To add more environment variables, use Add Value , or Add from ConfigMap and Secret .
Optional: To further customize your application, use the following advanced options: Trigger Triggers a new image build when the builder image changes. Add more triggers by clicking Add Trigger and selecting the Type and Secret . Secrets Adds secrets for your application. Add more secrets by clicking Add secret and selecting the Secret and Mount point . Policy Click Run policy to select the build run policy. The selected policy determines the order in which builds created from the build configuration must run. Hooks Select Run build hooks after image is built to run commands at the end of the build and verify the image. Add Hook type , Command , and Arguments to append to the command. Click Save to save the BuildConfig . 2.7.4. Deleting a BuildConfig You can delete a BuildConfig using the following command. Procedure To delete a BuildConfig , enter the following command: USD oc delete bc <BuildConfigName> This also deletes all builds that were instantiated from this BuildConfig . To delete a BuildConfig and keep the builds instantiated from the BuildConfig , specify the --cascade=false flag when you enter the following command: USD oc delete --cascade=false bc <BuildConfigName> 2.7.5. Viewing build details You can view build details with the web console or by using the oc describe CLI command. This displays information including: The build source. The build strategy. The output destination. Digest of the image in the destination registry. How the build was created. If the build uses the Docker or Source strategy, the oc describe output also includes information about the source revision used for the build, including the commit ID, author, committer, and message. Procedure To view build details, enter the following command: USD oc describe build <build_name> 2.7.6. Accessing build logs You can access build logs using the web console or the CLI. Procedure To stream the logs of a build directly, enter the following command: USD oc logs -f build/<build_name> 2.7.6.1. Accessing BuildConfig logs You can access BuildConfig logs using the web console or the CLI. Procedure To stream the logs of the latest build for a BuildConfig , enter the following command: USD oc logs -f bc/<buildconfig_name> 2.7.6.2. Accessing BuildConfig logs for a given version build You can access logs for a given version build for a BuildConfig using the web console or the CLI. Procedure To stream the logs for a given version build for a BuildConfig , enter the following command: USD oc logs --version=<number> bc/<buildconfig_name> 2.7.6.3. Enabling log verbosity You can enable a more verbose output by passing the BUILD_LOGLEVEL environment variable as part of the sourceStrategy or dockerStrategy in a BuildConfig . Note An administrator can set the default build verbosity for the entire OpenShift Container Platform instance by configuring env/BUILD_LOGLEVEL . This default can be overridden by specifying BUILD_LOGLEVEL in a given BuildConfig . You can specify a higher priority override on the command line for non-binary builds by passing --build-loglevel to oc start-build . Available log levels for source builds are as follows: Level 0 Produces output from containers running the assemble script and all encountered errors. This is the default. Level 1 Produces basic information about the executed process. Level 2 Produces very detailed information about the executed process. Level 3 Produces very detailed information about the executed process, and a listing of the archive contents.
Level 4 Currently produces the same information as level 3. Level 5 Produces everything mentioned on levels and additionally provides docker push messages. Procedure To enable more verbose output, pass the BUILD_LOGLEVEL environment variable as part of the sourceStrategy or dockerStrategy in a BuildConfig : sourceStrategy: ... env: - name: "BUILD_LOGLEVEL" value: "2" 1 1 Adjust this value to the desired log level. 2.8. Triggering and modifying builds The following sections outline how to trigger builds and modify builds using build hooks. 2.8.1. Build triggers When defining a BuildConfig , you can define triggers to control the circumstances in which the BuildConfig should be run. The following build triggers are available: Webhook Image change Configuration change 2.8.1.1. Webhook triggers Webhook triggers allow you to trigger a new build by sending a request to the OpenShift Container Platform API endpoint. You can define these triggers using GitHub, GitLab, Bitbucket, or Generic webhooks. Currently, OpenShift Container Platform webhooks only support the analogous versions of the push event for each of the Git-based Source Code Management (SCM) systems. All other event types are ignored. When the push events are processed, the OpenShift Container Platform control plane host confirms if the branch reference inside the event matches the branch reference in the corresponding BuildConfig . If so, it then checks out the exact commit reference noted in the webhook event on the OpenShift Container Platform build. If they do not match, no build is triggered. Note oc new-app and oc new-build create GitHub and Generic webhook triggers automatically, but any other needed webhook triggers must be added manually. You can manually add triggers by setting triggers. For all webhooks, you must define a secret with a key named WebHookSecretKey and the value being the value to be supplied when invoking the webhook. The webhook definition must then reference the secret. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The value of the key is compared to the secret provided during the webhook invocation. For example here is a GitHub webhook with a reference to a secret named mysecret : type: "GitHub" github: secretReference: name: "mysecret" The secret is then defined as follows. Note that the value of the secret is base64 encoded as is required for any data field of a Secret object. - kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx 2.8.1.1.1. Using GitHub webhooks GitHub webhooks handle the call made by GitHub when a repository is updated. When defining the trigger, you must specify a secret, which is part of the URL you supply to GitHub when configuring the webhook. Example GitHub webhook definition: type: "GitHub" github: secretReference: name: "mysecret" Note The secret used in the webhook trigger configuration is not the same as secret field you encounter when configuring webhook in GitHub UI. The former is to make the webhook URL unique and hard to predict, the latter is an optional string field used to create HMAC hex digest of the body, which is sent as an X-Hub-Signature header. 
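For convenience, a secret equivalent to the mysecret example above can also be created directly from the CLI; the literal value shown here is only illustrative and should be replaced with your own hard-to-guess value:
$ oc create secret generic mysecret --from-literal=WebHookSecretKey=secretvalue1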
The payload URL is returned as the GitHub Webhook URL by the oc describe command (see Displaying Webhook URLs), and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github Prerequisites Create a BuildConfig from a GitHub repository. Procedure To configure a GitHub Webhook: After creating a BuildConfig from a GitHub repository, run: USD oc describe bc/<name-of-your-BuildConfig> This generates a webhook GitHub URL that looks like: Example output <https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github Cut and paste this URL into GitHub, from the GitHub web console. In your GitHub repository, select Add Webhook from Settings Webhooks . Paste the URL output into the Payload URL field. Change the Content Type from GitHub's default application/x-www-form-urlencoded to application/json . Click Add webhook . You should see a message from GitHub stating that your webhook was successfully configured. Now, when you push a change to your GitHub repository, a new build automatically starts, and upon a successful build a new deployment starts. Note Gogs supports the same webhook payload format as GitHub. Therefore, if you are using a Gogs server, you can define a GitHub webhook trigger on your BuildConfig and trigger it by your Gogs server as well. Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook with curl : USD curl -H "X-GitHub-Event: push" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github The -k argument is only necessary if your API server does not have a properly signed certificate. Note The build will only be triggered if the ref value from GitHub webhook event matches the ref value specified in the source.git field of the BuildConfig resource. Additional resources Gogs 2.8.1.1.2. Using GitLab webhooks GitLab webhooks handle the call made by GitLab when a repository is updated. As with the GitHub triggers, you must specify a secret. The following example is a trigger definition YAML within the BuildConfig : type: "GitLab" gitlab: secretReference: name: "mysecret" The payload URL is returned as the GitLab Webhook URL by the oc describe command, and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab Procedure To configure a GitLab Webhook: Describe the BuildConfig to get the webhook URL: USD oc describe bc <name> Copy the webhook URL, replacing <secret> with your secret value. Follow the GitLab setup instructions to paste the webhook URL into your GitLab repository settings. Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook with curl : USD curl -H "X-GitLab-Event: Push Hook" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab The -k argument is only necessary if your API server does not have a properly signed certificate. 2.8.1.1.3. Using Bitbucket webhooks Bitbucket webhooks handle the call made by Bitbucket when a repository is updated. 
Similar to the triggers, you must specify a secret. The following example is a trigger definition YAML within the BuildConfig : type: "Bitbucket" bitbucket: secretReference: name: "mysecret" The payload URL is returned as the Bitbucket Webhook URL by the oc describe command, and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket Procedure To configure a Bitbucket Webhook: Describe the 'BuildConfig' to get the webhook URL: USD oc describe bc <name> Copy the webhook URL, replacing <secret> with your secret value. Follow the Bitbucket setup instructions to paste the webhook URL into your Bitbucket repository settings. Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook with curl : USD curl -H "X-Event-Key: repo:push" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket The -k argument is only necessary if your API server does not have a properly signed certificate. 2.8.1.1.4. Using generic webhooks Generic webhooks are invoked from any system capable of making a web request. As with the other webhooks, you must specify a secret, which is part of the URL that the caller must use to trigger the build. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The following is an example trigger definition YAML within the BuildConfig : type: "Generic" generic: secretReference: name: "mysecret" allowEnv: true 1 1 Set to true to allow a generic webhook to pass in environment variables. Procedure To set up the caller, supply the calling system with the URL of the generic webhook endpoint for your build: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic The caller must invoke the webhook as a POST operation. To invoke the webhook manually you can use curl : USD curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic The HTTP verb must be set to POST . The insecure -k flag is specified to ignore certificate validation. This second flag is not necessary if your cluster has properly signed certificates. The endpoint can accept an optional payload with the following format: git: uri: "<url to git repository>" ref: "<optional git reference>" commit: "<commit hash identifying a specific git commit>" author: name: "<author name>" email: "<author e-mail>" committer: name: "<committer name>" email: "<committer e-mail>" message: "<commit message>" env: 1 - name: "<variable name>" value: "<variable value>" 1 Similar to the BuildConfig environment variables, the environment variables defined here are made available to your build. If these variables collide with the BuildConfig environment variables, these variables take precedence. By default, environment variables passed by webhook are ignored. Set the allowEnv field to true on the webhook definition to enable this behavior. 
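For illustration, a minimal payload for this endpoint might look like the following sketch; the repository URL, reference, and environment variable are placeholders, and the env entry is only honored when allowEnv is set to true in the trigger definition:
git:
  uri: "https://github.com/example/app.git"
  ref: "main"
env:
- name: "EXAMPLE_VAR"
  value: "example-value"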
To pass this payload using curl , define it in a file named payload_file.yaml and run: USD curl -H "Content-Type: application/yaml" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic The arguments are the same as the example with the addition of a header and a payload. The -H argument sets the Content-Type header to application/yaml or application/json depending on your payload format. The --data-binary argument is used to send a binary payload with newlines intact with the POST request. Note OpenShift Container Platform permits builds to be triggered by the generic webhook even if an invalid request payload is presented, for example, invalid content type, unparsable or invalid content, and so on. This behavior is maintained for backwards compatibility. If an invalid request payload is presented, OpenShift Container Platform returns a warning in JSON format as part of its HTTP 200 OK response. 2.8.1.1.5. Displaying webhook URLs You can use the following command to display webhook URLs associated with a build configuration. If the command does not display any webhook URLs, then no webhook trigger is defined for that build configuration. Procedure To display any webhook URLs associated with a BuildConfig , run: USD oc describe bc <name> 2.8.1.2. Using image change triggers As a developer, you can configure your build to run automatically every time a base image changes. You can use image change triggers to automatically invoke your build when a new version of an upstream image is available. For example, if a build is based on a RHEL image, you can trigger that build to run any time the RHEL image changes. As a result, the application image is always running on the latest RHEL base image. Note Image streams that point to container images in v1 container registries only trigger a build once when the image stream tag becomes available and not on subsequent image updates. This is due to the lack of uniquely identifiable images in v1 container registries. Procedure Define an ImageStream that points to the upstream image you want to use as a trigger: kind: "ImageStream" apiVersion: "v1" metadata: name: "ruby-20-centos7" This defines the image stream that is tied to a container image repository located at <system-registry> / <namespace> /ruby-20-centos7 . The <system-registry> is defined as a service with the name docker-registry running in OpenShift Container Platform. If an image stream is the base image for the build, set the from field in the build strategy to point to the ImageStream : strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "ruby-20-centos7:latest" In this case, the sourceStrategy definition is consuming the latest tag of the image stream named ruby-20-centos7 located within this namespace. Define a build with one or more triggers that point to ImageStreams : type: "ImageChange" 1 imageChange: {} type: "ImageChange" 2 imageChange: from: kind: "ImageStreamTag" name: "custom-image:latest" 1 An image change trigger that monitors the ImageStream and Tag as defined by the build strategy's from field. The imageChange object here must be empty. 2 An image change trigger that monitors an arbitrary image stream. The imageChange part, in this case, must include a from field that references the ImageStreamTag to monitor. 
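For orientation, the trigger stanzas above are entries in the spec.triggers list of a BuildConfig . A minimal sketch that combines them with the sourceStrategy shown earlier, using illustrative names, follows:
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: ruby-sample-build
spec:
  strategy:
    sourceStrategy:
      from:
        kind: "ImageStreamTag"
        name: "ruby-20-centos7:latest"
  triggers:
  - type: "ImageChange"
    imageChange: {}
  - type: "ImageChange"
    imageChange:
      from:
        kind: "ImageStreamTag"
        name: "custom-image:latest"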
When using an image change trigger for the strategy image stream, the generated build is supplied with an immutable docker tag that points to the latest image corresponding to that tag. This new image reference is used by the strategy when it executes for the build. For other image change triggers that do not reference the strategy image stream, a new build is started, but the build strategy is not updated with a unique image reference. Since this example has an image change trigger for the strategy, the resulting build is: strategy: sourceStrategy: from: kind: "DockerImage" name: "172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>" This ensures that the triggered build uses the new image that was just pushed to the repository, and the build can be re-run any time with the same inputs. You can pause an image change trigger to allow multiple changes on the referenced image stream before a build is started. You can also set the paused attribute to true when initially adding an ImageChangeTrigger to a BuildConfig to prevent a build from being immediately triggered. type: "ImageChange" imageChange: from: kind: "ImageStreamTag" name: "custom-image:latest" paused: true In addition to setting the image field for all Strategy types, for custom builds, the OPENSHIFT_CUSTOM_BUILD_BASE_IMAGE environment variable is checked. If it does not exist, then it is created with the immutable image reference. If it does exist, then it is updated with the immutable image reference. If a build is triggered due to a webhook trigger or manual request, the build that is created uses the <immutableid> resolved from the ImageStream referenced by the Strategy . This ensures that builds are performed using consistent image tags for ease of reproduction. Additional resources v1 container registries 2.8.1.3. Identifying the image change trigger of a build As a developer, if you have image change triggers, you can identify which image change initiated the last build. This can be useful for debugging or troubleshooting builds. Example BuildConfig apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: bc-ict-example namespace: bc-ict-example-namespace spec: # ... triggers: - imageChange: from: kind: ImageStreamTag name: input:latest namespace: bc-ict-example-namespace - imageChange: from: kind: ImageStreamTag name: input2:latest namespace: bc-ict-example-namespace type: ImageChange status: imageChangeTriggers: - from: name: input:latest namespace: bc-ict-example-namespace lastTriggerTime: "2021-06-30T13:47:53Z" lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69 - from: name: input2:latest namespace: bc-ict-example-namespace lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69 lastVersion: 1 Note This example omits elements that are not related to image change triggers. Prerequisites You have configured multiple image change triggers. These triggers have triggered one or more builds. Procedure In buildConfig.status.imageChangeTriggers , compare the lastTriggerTime timestamps to identify the ImageChangeTriggerStatus element with the most recent timestamp. That element identifies the image change trigger that initiated the last build. Image change triggers In your build configuration, buildConfig.spec.triggers is an array of build trigger policies, BuildTriggerPolicy .
Each BuildTriggerPolicy has a type field and set of pointers fields. Each pointer field corresponds to one of the allowed values for the type field. As such, you can only set BuildTriggerPolicy to only one pointer field. For image change triggers, the value of type is ImageChange . Then, the imageChange field is the pointer to an ImageChangeTrigger object, which has the following fields: lastTriggeredImageID : This field, which is not shown in the example, is deprecated in OpenShift Container Platform 4.8 and will be ignored in a future release. It contains the resolved image reference for the ImageStreamTag when the last build was triggered from this BuildConfig . paused : You can use this field, which is not shown in the example, to temporarily disable this particular image change trigger. from : You use this field to reference the ImageStreamTag that drives this image change trigger. Its type is the core Kubernetes type, OwnerReference . The from field has the following fields of note: kind : For image change triggers, the only supported value is ImageStreamTag . namespace : You use this field to specify the namespace of the ImageStreamTag . ** name : You use this field to specify the ImageStreamTag . Image change trigger status In your build configuration, buildConfig.status.imageChangeTriggers is an array of ImageChangeTriggerStatus elements. Each ImageChangeTriggerStatus element includes the from , lastTriggeredImageID , and lastTriggerTime elements shown in the preceding example. The ImageChangeTriggerStatus that has the most recent lastTriggerTime triggered the most recent build. You use its name and namespace to identify the image change trigger in buildConfig.spec.triggers that triggered the build. The lastTriggerTime with the most recent timestamp signifies the ImageChangeTriggerStatus of the last build. This ImageChangeTriggerStatus has the same name and namespace as the image change trigger in buildConfig.spec.triggers that triggered the build. Additional resources v1 container registries 2.8.1.4. Configuration change triggers A configuration change trigger allows a build to be automatically invoked as soon as a new BuildConfig is created. The following is an example trigger definition YAML within the BuildConfig : type: "ConfigChange" Note Configuration change triggers currently only work when creating a new BuildConfig . In a future release, configuration change triggers will also be able to launch a build whenever a BuildConfig is updated. 2.8.1.4.1. Setting triggers manually Triggers can be added to and removed from build configurations with oc set triggers . Procedure To set a GitHub webhook trigger on a build configuration, use: USD oc set triggers bc <name> --from-github To set an imagechange trigger, use: USD oc set triggers bc <name> --from-image='<image>' To remove a trigger, add --remove : USD oc set triggers bc <name> --from-bitbucket --remove Note When a webhook trigger already exists, adding it again regenerates the webhook secret. For more information, consult the help documentation with by running: USD oc set triggers --help 2.8.2. Build hooks Build hooks allow behavior to be injected into the build process. The postCommit field of a BuildConfig object runs commands inside a temporary container that is running the build output image. The hook is run immediately after the last layer of the image has been committed and before the image is pushed to a registry. 
The current working directory is set to the image's WORKDIR , which is the default working directory of the container image. For most images, this is where the source code is located. The hook fails if the script or command returns a non-zero exit code or if starting the temporary container fails. When the hook fails it marks the build as failed and the image is not pushed to a registry. The reason for failing can be inspected by looking at the build logs. Build hooks can be used to run unit tests to verify the image before the build is marked complete and the image is made available in a registry. If all tests pass and the test runner returns with exit code 0 , the build is marked successful. In case of any test failure, the build is marked as failed. In all cases, the build log contains the output of the test runner, which can be used to identify failed tests. The postCommit hook is not only limited to running tests, but can be used for other commands as well. Since it runs in a temporary container, changes made by the hook do not persist, meaning that running the hook cannot affect the final image. This behavior allows for, among other uses, the installation and usage of test dependencies that are automatically discarded and are not present in the final image. 2.8.2.1. Configuring post commit build hooks There are different ways to configure the post build hook. All forms in the following examples are equivalent and run bundle exec rake test --verbose . Procedure Shell script: postCommit: script: "bundle exec rake test --verbose" The script value is a shell script to be run with /bin/sh -ic . Use this when a shell script is appropriate to execute the build hook. For example, for running unit tests as above. To control the image entry point, or if the image does not have /bin/sh , use command and/or args . Note The additional -i flag was introduced to improve the experience working with CentOS and RHEL images, and may be removed in a future release. Command as the image entry point: postCommit: command: ["/bin/bash", "-c", "bundle exec rake test --verbose"] In this form, command is the command to run, which overrides the image entry point in the exec form, as documented in the Dockerfile reference . This is needed if the image does not have /bin/sh , or if you do not want to use a shell. In all other cases, using script might be more convenient. Command with arguments: postCommit: command: ["bundle", "exec", "rake", "test"] args: ["--verbose"] This form is equivalent to appending the arguments to command . Note Providing both script and command simultaneously creates an invalid build hook. 2.8.2.2. Using the CLI to set post commit build hooks The oc set build-hook command can be used to set the build hook for a build configuration. Procedure To set a command as the post-commit build hook: USD oc set build-hook bc/mybc \ --post-commit \ --command \ -- bundle exec rake test --verbose To set a script as the post-commit build hook: USD oc set build-hook bc/mybc --post-commit --script="bundle exec rake test --verbose" 2.9. Performing advanced builds The following sections provide instructions for advanced build operations including setting build resources and maximum duration, assigning builds to nodes, chaining builds, build pruning, and build run policies. 2.9.1. Setting build resources By default, builds are completed by pods using unbound resources, such as memory and CPU. These resources can be limited. 
Procedure You can limit resource use in two ways: Limit resource use by specifying resource limits in the default container limits of a project. Limit resource use by specifying resource limits as part of the build configuration. In the following example, each of the resources , cpu , and memory parameters is optional: apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: resources: limits: cpu: "100m" 1 memory: "256Mi" 2 1 cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3). 2 memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20). However, if a quota has been defined for your project, one of the following two items is required: A resources section set with an explicit requests : resources: requests: 1 cpu: "100m" memory: "256Mi" 1 The requests object contains the list of resources that correspond to the list of resources in the quota. A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the build process. Otherwise, build pod creation will fail, citing a failure to satisfy quota. 2.9.2. Setting maximum duration When defining a BuildConfig object, you can define its maximum duration by setting the completionDeadlineSeconds field. It is specified in seconds and is not set by default. When not set, there is no maximum duration enforced. The maximum duration is counted from the time when a build pod gets scheduled in the system, and defines how long it can be active, including the time needed to pull the builder image. After reaching the specified timeout, the build is terminated by OpenShift Container Platform. Procedure To set maximum duration, specify completionDeadlineSeconds in your BuildConfig . The following example shows the part of a BuildConfig that specifies the completionDeadlineSeconds field for 30 minutes: spec: completionDeadlineSeconds: 1800 Note This setting is not supported with the Pipeline Strategy option. 2.9.3. Assigning builds to specific nodes Builds can be targeted to run on specific nodes by specifying labels in the nodeSelector field of a build configuration. The nodeSelector value is a set of key-value pairs that are matched to Node labels when scheduling the build pod. The nodeSelector value can also be controlled by cluster-wide default and override values. Defaults will only be applied if the build configuration does not define any key-value pairs for the nodeSelector and also does not define an explicitly empty map value of nodeSelector:{} . Override values will replace values in the build configuration on a key by key basis. Note If the specified NodeSelector cannot be matched to a node with those labels, the build stays in the Pending state indefinitely. Procedure Assign builds to run on specific nodes by assigning labels in the nodeSelector field of the BuildConfig , for example: apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: nodeSelector: 1 key1: value1 key2: value2 1 Builds associated with this build configuration will run only on nodes with the key1=value1 and key2=value2 labels. 2.9.4. Chained builds For compiled languages such as Go, C, C++, and Java, including the dependencies necessary for compilation in the application image might increase the size of the image or introduce vulnerabilities that can be exploited. To avoid these problems, two builds can be chained together. One build that produces the compiled artifact, and a second build that places that artifact in a separate image that runs the artifact.
In the following example, a source-to-image (S2I) build is combined with a docker build to compile an artifact that is then placed in a separate runtime image. Note Although this example chains a S2I build and a docker build, the first build can use any strategy that produces an image containing the desired artifacts, and the second build can use any strategy that can consume input content from an image. The first build takes the application source and produces an image containing a WAR file. The image is pushed to the artifact-image image stream. The path of the output artifact depends on the assemble script of the S2I builder used. In this case, it is output to /wildfly/standalone/deployments/ROOT.war . apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: artifact-build spec: output: to: kind: ImageStreamTag name: artifact-image:latest source: git: uri: https://github.com/openshift/openshift-jee-sample.git ref: "master" strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift The second build uses image source with a path to the WAR file inside the output image from the first build. An inline dockerfile copies that WAR file into a runtime image. apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: image-build spec: output: to: kind: ImageStreamTag name: image-build:latest source: dockerfile: |- FROM jee-runtime:latest COPY ROOT.war /deployments/ROOT.war images: - from: 1 kind: ImageStreamTag name: artifact-image:latest paths: 2 - sourcePath: /wildfly/standalone/deployments/ROOT.war destinationDir: "." strategy: dockerStrategy: from: 3 kind: ImageStreamTag name: jee-runtime:latest triggers: - imageChange: {} type: ImageChange 1 from specifies that the docker build should include the output of the image from the artifact-image image stream, which was the target of the build. 2 paths specifies which paths from the target image to include in the current docker build. 3 The runtime image is used as the source image for the docker build. The result of this setup is that the output image of the second build does not have to contain any of the build tools that are needed to create the WAR file. Also, because the second build contains an image change trigger, whenever the first build is run and produces a new image with the binary artifact, the second build is automatically triggered to produce a runtime image that contains that artifact. Therefore, both builds behave as a single build with two stages. 2.9.5. Pruning builds By default, builds that have completed their lifecycle are persisted indefinitely. You can limit the number of builds that are retained. Procedure Limit the number of builds that are retained by supplying a positive integer value for successfulBuildsHistoryLimit or failedBuildsHistoryLimit in your BuildConfig , for example: apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2 1 successfulBuildsHistoryLimit will retain up to two builds with a status of completed . 2 failedBuildsHistoryLimit will retain up to two builds with a status of failed , canceled , or error . Trigger build pruning by one of the following actions: Updating a build configuration. Waiting for a build to complete its lifecycle. Builds are sorted by their creation timestamp with the oldest builds being pruned first. Note Administrators can manually prune builds using the 'oc adm' object pruning command. 2.9.6. 
Build run policy The build run policy describes the order in which the builds created from the build configuration should run. This can be done by changing the value of the runPolicy field in the spec section of the Build specification. It is also possible to change the runPolicy value for existing build configurations, by: Changing Parallel to Serial or SerialLatestOnly and triggering a new build from this configuration causes the new build to wait until all parallel builds complete as the serial build can only run alone. Changing Serial to SerialLatestOnly and triggering a new build causes cancellation of all existing builds in queue, except the currently running build and the most recently created build. The newest build runs . 2.10. Using Red Hat subscriptions in builds Use the following sections to run entitled builds on OpenShift Container Platform. 2.10.1. Creating an image stream tag for the Red Hat Universal Base Image To use Red Hat subscriptions within a build, you create an image stream tag to reference the Universal Base Image (UBI). To make the UBI available in every project in the cluster, you add the image stream tag to the openshift namespace. Otherwise, to make it available in a specific project , you add the image stream tag to that project. The benefit of using image stream tags this way is that doing so grants access to the UBI based on the registry.redhat.io credentials in the install pull secret without exposing the pull secret to other users. This is more convenient than requiring each developer to install pull secrets with registry.redhat.io credentials in each project. Procedure To create an ImageStreamTag in the openshift namespace, so it is available to developers in all projects, enter: USD oc tag --source=docker registry.redhat.io/ubi8/ubi:latest ubi:latest -n openshift Tip You can alternatively apply the following YAML to create an ImageStreamTag in the openshift namespace: apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi namespace: openshift spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi8/ubi:latest name: latest referencePolicy: type: Source To create an ImageStreamTag in a single project, enter: USD oc tag --source=docker registry.redhat.io/ubi8/ubi:latest ubi:latest Tip You can alternatively apply the following YAML to create an ImageStreamTag in a single project: apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi8/ubi:latest name: latest referencePolicy: type: Source 2.10.2. Adding subscription entitlements as a build secret Builds that use Red Hat subscriptions to install content must include the entitlement keys as a build secret. Prerequisites You must have access to Red Hat entitlements through your subscription. The entitlement secret is automatically created by the Insights Operator. Tip When you perform an Entitlement Build using Red Hat Enterprise Linux (RHEL) 7, you must have the following instructions in your Dockerfile before you run any yum commands: RUN rm /etc/rhsm-host Procedure Add the etc-pki-entitlement secret as a build volume in the build configuration's Docker strategy: strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi:latest volumes: - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement 2.10.3. Running builds with Subscription Manager 2.10.3.1. 
Docker builds using Subscription Manager Docker strategy builds can use the Subscription Manager to install subscription content. Prerequisites The entitlement keys must be added as build strategy volumes. Procedure Use the following as an example Dockerfile to install content with the Subscription Manager: FROM registry.redhat.io/ubi8/ubi:latest RUN dnf search kernel-devel --showduplicates && \ dnf install -y kernel-devel 2.10.4. Running builds with Red Hat Satellite subscriptions 2.10.4.1. Adding Red Hat Satellite configurations to builds Builds that use Red Hat Satellite to install content must provide appropriate configurations to obtain content from Satellite repositories. Prerequisites You must provide or create a yum -compatible repository configuration file that downloads content from your Satellite instance. Sample repository configuration [test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem Procedure Create a ConfigMap containing the Satellite repository configuration file: USD oc create configmap yum-repos-d --from-file /path/to/satellite.repo Add the Satellite repository configuration and entitlement key as a build volumes: strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi:latest volumes: - name: yum-repos-d mounts: - destinationPath: /etc/yum.repos.d source: type: ConfigMap configMap: name: yum-repos-d - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement 2.10.4.2. Docker builds using Red Hat Satellite subscriptions Docker strategy builds can use Red Hat Satellite repositories to install subscription content. Prerequisites You have added the entitlement keys and Satellite repository configurations as build volumes. Procedure Use the following as an example Dockerfile to install content with Satellite: FROM registry.redhat.io/ubi8/ubi:latest RUN dnf search kernel-devel --showduplicates && \ dnf install -y kernel-devel Additional resources How to use builds with Red Hat Satellite subscriptions and which certificate to use 2.10.5. Running entitled builds using SharedSecret objects You can configure and perform a build in one namespace that securely uses RHEL entitlements from a Secret object in another namespace. You can still access RHEL entitlements from OpenShift Builds by creating a Secret object with your subscription credentials in the same namespace as your Build object. However, now, in OpenShift Container Platform 4.10 and later, you can access your credentials and certificates from a Secret object in one of the OpenShift Container Platform system namespaces. You run entitled builds with a CSI volume mount of a SharedSecret custom resource (CR) instance that references the Secret object. This procedure relies on the newly introduced Shared Resources CSI Driver feature, which you can use to declare CSI Volume mounts in OpenShift Container Platform Builds. It also relies on the OpenShift Container Platform Insights Operator. Important The Shared Resources CSI Driver and The Build CSI Volumes are both Technology Preview features, which are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The Shared Resources CSI Driver and the Build CSI Volumes features also belong to the TechPreviewNoUpgrade feature set, which is a subset of the current Technology Preview features. You can enable the TechPreviewNoUpgrade feature set on test clusters, where you can fully test them while leaving the features disabled on production clusters. Enabling this feature set cannot be undone and prevents minor version updates. This feature set is not recommended on production clusters. See "Enabling Technology Preview features using feature gates" in the following "Additional resources" section. Prerequisites You have enabled the TechPreviewNoUpgrade feature set by using the feature gates. You have a SharedSecret custom resource (CR) instance that references the Secret object where the Insights Operator stores the subscription credentials. You must have permission to perform the following actions: Create build configs and start builds. Discover which SharedSecret CR instances are available by entering the oc get sharedsecrets command and getting a non-empty list back. Determine if the builder service account available to you in your namespace is allowed to use the given SharedSecret CR instance. In other words, you can run oc adm policy who-can use <identifier of specific SharedSecret> to see if the builder service account in your namespace is listed. Note If neither of the last two prerequisites in this list are met, establish, or ask someone to establish, the necessary role-based access control (RBAC) so that you can discover SharedSecret CR instances and enable service accounts to use SharedSecret CR instances. Procedure Grant the builder service account RBAC permissions to use the SharedSecret CR instance by using oc apply with YAML content: Note Currently, kubectl and oc have hard-coded special case logic restricting the use verb to roles centered around pod security. Therefore, you cannot use oc create role ... to create the role needed for consuming SharedSecret CR instances. Example oc apply -f command with YAML Role object definition USD oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - my-share verbs: - use EOF Create the RoleBinding associated with the role by using the oc command: Example oc create rolebinding command USD oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder Create a BuildConfig object that accesses the RHEL entitlements. 
Example YAML BuildConfig object definition apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: my-csi-bc namespace: my-csi-app-namespace spec: runPolicy: Serial source: dockerfile: | FROM registry.redhat.io/ubi8/ubi:latest RUN ls -la /etc/pki/entitlement RUN rm /etc/rhsm-host RUN yum repolist --disablerepo=* RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms RUN yum -y update RUN yum install -y openshift-clients.x86_64 strategy: type: Docker dockerStrategy: volumes: - mounts: - destinationPath: "/etc/pki/entitlement" name: my-csi-shared-secret source: csi: driver: csi.sharedresource.openshift.io readOnly: true volumeAttributes: sharedSecret: my-share-bc type: CSI Start a build from the BuildConfig object and follow the logs with the oc command. Example oc start-build command USD oc start-build my-csi-bc -F Example 2.1. Example output from the oc start-build command Note Some sections of the following output have been replaced with ... build.build.openshift.io/my-csi-bc-1 started Caching blobs under "/var/cache/blobs". Pulling image registry.redhat.io/ubi8/ubi:latest ... Trying to pull registry.redhat.io/ubi8/ubi:latest... Getting image source signatures Copying blob sha256:5dcbdc60ea6b60326f98e2b49d6ebcb7771df4b70c6297ddf2d7dede6692df6e Copying blob sha256:8671113e1c57d3106acaef2383f9bbfe1c45a26eacb03ec82786a494e15956c3 Copying config sha256:b81e86a2cb9a001916dc4697d7ed4777a60f757f0b8dcc2c4d8df42f2f7edb3a Writing manifest to image destination Storing signatures Adding transient rw bind mount for /run/secrets/rhsm STEP 1/9: FROM registry.redhat.io/ubi8/ubi:latest STEP 2/9: RUN ls -la /etc/pki/entitlement total 360 drwxrwxrwt. 2 root root 80 Feb 3 20:28 . drwxr-xr-x. 10 root root 154 Jan 27 15:53 .. -rw-r--r--. 1 root root 3243 Feb 3 20:28 entitlement-key.pem -rw-r--r--. 1 root root 362540 Feb 3 20:28 entitlement.pem time="2022-02-03T20:28:32Z" level=warning msg="Adding metacopy option, configured globally" --> 1ef7c6d8c1a STEP 3/9: RUN rm /etc/rhsm-host time="2022-02-03T20:28:33Z" level=warning msg="Adding metacopy option, configured globally" --> b1c61f88b39 STEP 4/9: RUN yum repolist --disablerepo=* Updating Subscription Management repositories. ... --> b067f1d63eb STEP 5/9: RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms Repository 'rhocp-4.9-for-rhel-8-x86_64-rpms' is enabled for this system. time="2022-02-03T20:28:40Z" level=warning msg="Adding metacopy option, configured globally" --> 03927607ebd STEP 6/9: RUN yum -y update Updating Subscription Management repositories. ... Upgraded: systemd-239-51.el8_5.3.x86_64 systemd-libs-239-51.el8_5.3.x86_64 systemd-pam-239-51.el8_5.3.x86_64 Installed: diffutils-3.6-6.el8.x86_64 libxkbcommon-0.9.1-1.el8.x86_64 xkeyboard-config-2.28-1.el8.noarch Complete! time="2022-02-03T20:29:05Z" level=warning msg="Adding metacopy option, configured globally" --> db57e92ff63 STEP 7/9: RUN yum install -y openshift-clients.x86_64 Updating Subscription Management repositories. ... Installed: bash-completion-1:2.7-5.el8.noarch libpkgconf-1.4.2-1.el8.x86_64 openshift-clients-4.9.0-202201211735.p0.g3f16530.assembly.stream.el8.x86_64 pkgconf-1.4.2-1.el8.x86_64 pkgconf-m4-1.4.2-1.el8.noarch pkgconf-pkg-config-1.4.2-1.el8.x86_64 Complete! 
time="2022-02-03T20:29:19Z" level=warning msg="Adding metacopy option, configured globally" --> 609507b059e STEP 8/9: ENV "OPENSHIFT_BUILD_NAME"="my-csi-bc-1" "OPENSHIFT_BUILD_NAMESPACE"="my-csi-app-namespace" --> cab2da3efc4 STEP 9/9: LABEL "io.openshift.build.name"="my-csi-bc-1" "io.openshift.build.namespace"="my-csi-app-namespace" COMMIT temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca --> 821b582320b Successfully tagged temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca 821b582320b41f1d7bab4001395133f86fa9cc99cc0b2b64c5a53f2b6750db91 Build complete, no image push requested 2.10.6. Additional resources Importing simple content access certificates with Insights Operator Enabling features using feature gates Managing image streams build strategy 2.11. Securing builds by strategy Builds in OpenShift Container Platform are run in privileged containers. Depending on the build strategy used, if you have privileges, you can run builds to escalate their permissions on the cluster and host nodes. And as a security measure, it limits who can run builds and the strategy that is used for those builds. Custom builds are inherently less safe than source builds, because they can execute any code within a privileged container, and are disabled by default. Grant docker build permissions with caution, because a vulnerability in the Dockerfile processing logic could result in a privileges being granted on the host node. By default, all users that can create builds are granted permission to use the docker and Source-to-image (S2I) build strategies. Users with cluster administrator privileges can enable the custom build strategy, as referenced in the restricting build strategies to a user globally section. You can control who can build and which build strategies they can use by using an authorization policy. Each build strategy has a corresponding build subresource. A user must have permission to create a build and permission to create on the build strategy subresource to create builds using that strategy. Default roles are provided that grant the create permission on the build strategy subresource. Table 2.3. Build Strategy Subresources and Roles Strategy Subresource Role Docker builds/docker system:build-strategy-docker Source-to-Image builds/source system:build-strategy-source Custom builds/custom system:build-strategy-custom JenkinsPipeline builds/jenkinspipeline system:build-strategy-jenkinspipeline 2.11.1. Disabling access to a build strategy globally To prevent access to a particular build strategy globally, log in as a user with cluster administrator privileges, remove the corresponding role from the system:authenticated group, and apply the annotation rbac.authorization.kubernetes.io/autoupdate: "false" to protect them from changes between the API restarts. The following example shows disabling the docker build strategy. 
Procedure Apply the rbac.authorization.kubernetes.io/autoupdate annotation: USD oc edit clusterrolebinding system:build-strategy-docker-binding Example output apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "false" 1 creationTimestamp: 2018-08-10T01:24:14Z name: system:build-strategy-docker-binding resourceVersion: "225" selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system%3Abuild-strategy-docker-binding uid: 17b1f3d4-9c3c-11e8-be62-0800277d20bf roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:build-strategy-docker subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:authenticated 1 Change the rbac.authorization.kubernetes.io/autoupdate annotation's value to "false" . Remove the role: USD oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated Ensure the build strategy subresources are also removed from these roles: USD oc edit clusterrole admin USD oc edit clusterrole edit For each role, specify the subresources that correspond to the resource of the strategy to disable. Disable the docker Build Strategy for admin : kind: ClusterRole metadata: name: admin ... - apiGroups: - "" - build.openshift.io resources: - buildconfigs - buildconfigs/webhooks - builds/custom 1 - builds/source verbs: - create - delete - deletecollection - get - list - patch - update - watch ... 1 Add builds/custom and builds/source to disable docker builds globally for users with the admin role. 2.11.2. Restricting build strategies to users globally You can allow a set of specific users to create builds with a particular strategy. Prerequisites Disable global access to the build strategy. Procedure Assign the role that corresponds to the build strategy to a specific user. For example, to add the system:build-strategy-docker cluster role to the user devuser : USD oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser Warning Granting a user access at the cluster level to the builds/docker subresource means that the user can create builds with the docker strategy in any project in which they can create builds. 2.11.3. Restricting build strategies to a user within a project Similar to granting the build strategy role to a user globally, you can allow a set of specific users within a project to create builds with a particular strategy. Prerequisites Disable global access to the build strategy. Procedure Assign the role that corresponds to the build strategy to a specific user within a project. For example, to add the system:build-strategy-docker role within the project devproject to the user devuser : USD oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject 2.12. Build configuration resources Use the following procedure to configure build settings. 2.12.1. Build controller configuration parameters The build.config.openshift.io/cluster resource offers the following configuration parameters. Parameter Description Build Holds cluster-wide information on how to handle builds. The canonical, and only valid name is cluster . spec : Holds user-settable values for the build controller configuration. buildDefaults Controls the default information for builds. defaultProxy : Contains the default proxy settings for all build operations, including image pull or push and source download. 
You can override values by setting the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables in the BuildConfig strategy. gitProxy : Contains the proxy settings for Git operations only. If set, this overrides any proxy settings for all Git commands, such as git clone . Values that are not set here are inherited from DefaultProxy. env : A set of default environment variables that are applied to the build if the specified variables do not exist on the build. imageLabels : A list of labels that are applied to the resulting image. You can override a default label by providing a label with the same name in the BuildConfig . resources : Defines resource requirements to execute the build. ImageLabel name : Defines the name of the label. It must have non-zero length. buildOverrides Controls override settings for builds. imageLabels : A list of labels that are applied to the resulting image. If you provided a label in the BuildConfig with the same name as one in this table, your label will be overwritten. nodeSelector : A selector which must be true for the build pod to fit on a node. tolerations : A list of tolerations that overrides any existing tolerations set on a build pod. BuildList items : Standard object's metadata. 2.12.2. Configuring build settings You can configure build settings by editing the build.config.openshift.io/cluster resource. Procedure Edit the build.config.openshift.io/cluster resource: USD oc edit build.config.openshift.io/cluster The following is an example build.config.openshift.io/cluster resource: apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 2 name: cluster resourceVersion: "107233" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists 1 Build : Holds cluster-wide information on how to handle builds. The canonical, and only valid name is cluster . 2 buildDefaults : Controls the default information for builds. 3 defaultProxy : Contains the default proxy settings for all build operations, including image pull or push and source download. 4 env : A set of default environment variables that are applied to the build if the specified variables do not exist on the build. 5 gitProxy : Contains the proxy settings for Git operations only. If set, this overrides any Proxy settings for all Git commands, such as git clone . 6 imageLabels : A list of labels that are applied to the resulting image. You can override a default label by providing a label with the same name in the BuildConfig . 7 resources : Defines resource requirements to execute the build. 8 buildOverrides : Controls override settings for builds. 9 imageLabels : A list of labels that are applied to the resulting image. If you provided a label in the BuildConfig with the same name as one in this table, your label will be overwritten. 
10 nodeSelector : A selector which must be true for the build pod to fit on a node. 11 tolerations : A list of tolerations that overrides any existing tolerations set on a build pod. 2.13. Troubleshooting builds Use the following to troubleshoot build issues. 2.13.1. Resolving denial for access to resources If your request for access to resources is denied: Issue A build fails with: requested access to the resource is denied Resolution You have exceeded one of the image quotas set on your project. Check your current quota and verify the limits applied and storage in use: USD oc describe quota 2.13.2. Service certificate generation failure If service certificate generation fails: Issue If a service certificate generation fails with (service's service.beta.openshift.io/serving-cert-generation-error annotation contains): Example output secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 Resolution The service that generated the certificate no longer exists, or has a different serviceUID . You must force certificate regeneration by removing the old secret, and clearing the following annotations on the service: service.beta.openshift.io/serving-cert-generation-error and service.beta.openshift.io/serving-cert-generation-error-num : USD oc delete secret <secret_name> USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command that removes an annotation has a - after the annotation name to be removed. 2.14. Setting up additional trusted certificate authorities for builds Use the following sections to set up additional certificate authorities (CA) to be trusted by builds when pulling images from an image registry. The procedure requires a cluster administrator to create a ConfigMap and add additional CAs as keys in the ConfigMap . The ConfigMap must be created in the openshift-config namespace. domain is the key in the ConfigMap and value is the PEM-encoded certificate. Each CA must be associated with a domain. The domain format is hostname[..port] . The ConfigMap name must be set in the image.config.openshift.io/cluster cluster-scoped configuration resource's spec.additionalTrustedCA field. 2.14.1. Adding certificate authorities to the cluster You can add certificate authorities (CA) to the cluster for use when pushing and pulling images with the following procedure. Prerequisites You must have cluster administrator privileges. You must have access to the public certificates of the registry, usually a hostname/ca.crt file located in the /etc/docker/certs.d/ directory. Procedure Create a ConfigMap in the openshift-config namespace containing the trusted certificates for the registries that use self-signed certificates. For each CA file, ensure the key in the ConfigMap is the hostname of the registry in the hostname[..port] format: USD oc create configmap registry-cas -n openshift-config \ --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt \ --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt Update the cluster image configuration: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}' --type=merge 2.14.2. Additional resources Create a ConfigMap Secrets and ConfigMaps Configuring a custom PKI | [
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: \"ruby-sample-build\" 1 spec: runPolicy: \"Serial\" 2 triggers: 3 - type: \"GitHub\" github: secret: \"secret101\" - type: \"Generic\" generic: secret: \"secret101\" - type: \"ImageChange\" source: 4 git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: 5 sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\" output: 6 to: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" postCommit: 7 script: \"bundle exec rake test\"",
"source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: \"master\" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: \"app/dir\" 3 dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 4",
"source: dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 1",
"source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: \"master\" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar",
"oc secrets link builder dockerhub",
"source: git: 1 uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" contextDir: \"app/dir\" 2 dockerfile: \"FROM openshift/ruby-22-centos7\\nUSER example\" 3",
"source: git: uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com",
"oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*'",
"kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data:",
"oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*'",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\" source: git: uri: \"https://github.com/user/app.git\" sourceSecret: name: \"basicsecret\" strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"python-33-centos7:latest\"",
"oc set build-secret --source bc/sample-build basicsecret",
"oc create secret generic <secret_name> --from-file=<path/to/.gitconfig>",
"[http] sslVerify=false",
"cat .gitconfig",
"[user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt",
"oc create secret generic <secret_name> --from-literal=username=<user_name> \\ 1 --from-literal=password=<password> \\ 2 --from-file=.gitconfig=.gitconfig --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt --from-file=client.key=/var/run/secrets/openshift.io/source/client.key",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=password=<token> --type=kubernetes.io/basic-auth",
"ssh-keygen -t ed25519 -C \"[email protected]\"",
"oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/known_hosts> \\ 1 --type=kubernetes.io/ssh-auth",
"cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt",
"oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1",
"oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/.gitconfig> --type=kubernetes.io/ssh-auth",
"oc create secret generic <secret_name> --from-file=ca.crt=<path/to/certificate> --from-file=<path/to/.gitconfig>",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth",
"apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5",
"oc create -f <filename>",
"oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>",
"apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"oc create -f <your_yaml_file>.yaml",
"oc logs secret-example-pod",
"oc delete pod secret-example-pod",
"apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username",
"oc create configmap settings-mvn --from-file=settings.xml=<path/to/settings.xml>",
"apiVersion: core/v1 kind: ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... # Insert maven settings here </settings>",
"oc create secret generic secret-mvn --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> --type=kubernetes.io/ssh-auth",
"apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded",
"source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn",
"oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn\" --build-config-map \"settings-mvn\"",
"source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: \".m2\" secrets: - secret: name: secret-mvn destinationDir: \".ssh\"",
"oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn:.ssh\" --build-config-map \"settings-mvn:.m2\"",
"FROM centos/ruby-22-centos7 USER root COPY ./secret-dir /secrets COPY ./config / Create a shell script that will output secrets and ConfigMaps when the image is run RUN echo '#!/bin/sh' > /input_report.sh RUN echo '(test -f /secrets/secret1 && echo -n \"secret1=\" && cat /secrets/secret1)' >> /input_report.sh RUN echo '(test -f /config && echo -n \"relative-configMap=\" && cat /config)' >> /input_report.sh RUN chmod 755 /input_report.sh CMD [\"/bin/sh\", \"-c\", \"/input_report.sh\"]",
"#!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar",
"#!/bin/sh exec java -jar app.jar",
"FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ \"java\", \"-jar\", \"app.jar\" ]",
"auths: index.docker.io/v1/: 1 auth: \"YWRfbGzhcGU6R2labnRib21ifTE=\" 2 email: \"[email protected]\" 3 docker.io/my-namespace/my-user/my-image: 4 auth: \"GzhYWRGU6R2fbclabnRgbkSp=\"\" email: \"[email protected]\" docker.io/my-namespace: 5 auth: \"GzhYWRGU6R2deesfrRgbkSp=\"\" email: \"[email protected]\"",
"oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"spec: output: to: kind: \"DockerImage\" name: \"private.registry.com/org/private-image:latest\" pushSecret: name: \"dockerhub\"",
"oc set build-secret --push bc/sample-build dockerhub",
"oc secrets link builder dockerhub",
"strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"docker.io/user/private_repository\" pullSecret: name: \"dockerhub\"",
"oc set build-secret --pull bc/sample-build dockerhub",
"oc secrets link builder dockerhub",
"env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret",
"spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\"",
"spec: output: to: kind: \"DockerImage\" name: \"my-registry.mycompany.com:5000/myimages/myimage:tag\"",
"spec: output: to: kind: \"ImageStreamTag\" name: \"my-image:latest\" imageLabels: - name: \"vendor\" value: \"MyCompany\" - name: \"authoritative-source-url\" value: \"registry.mycompany.com\"",
"strategy: dockerStrategy: from: kind: \"ImageStreamTag\" name: \"debian:latest\"",
"strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile",
"dockerStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"",
"dockerStrategy: buildArgs: - name: \"foo\" value: \"bar\"",
"strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers",
"spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value",
"strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"incremental-image:latest\" 1 incremental: true 2",
"strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"builder-image:latest\" scripts: \"http://somehost.com/scripts_directory\" 1",
"sourceStrategy: env: - name: \"DISABLE_ASSET_COMPILATION\" value: \"true\"",
"#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd",
"#!/bin/bash run the application /opt/application/run.sh",
"#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd",
"#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF",
"spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value",
"strategy: customStrategy: from: kind: \"DockerImage\" name: \"openshift/sti-image-builder\"",
"strategy: customStrategy: secrets: - secretSource: 1 name: \"secret1\" mountPath: \"/tmp/secret1\" 2 - secretSource: name: \"secret2\" mountPath: \"/tmp/secret2\"",
"customStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"",
"oc set env <enter_variables>",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') }",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: source: git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1",
"jenkinsPipelineStrategy: env: - name: \"FOO\" value: \"BAR\"",
"oc project <project_name>",
"oc new-app jenkins-ephemeral 1",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"nodejs-sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline",
"def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo \"Using project: USD{openshift.project()}\" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector(\"all\", [ template : templateName ]).delete() 5 if (openshift.selector(\"secrets\", templateName).exists()) { 6 openshift.selector(\"secrets\", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector(\"bc\", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == \"Complete\") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector(\"dc\", templateName).rollout() timeout(5) { 9 openshift.selector(\"dc\", templateName).related('pods').untilEach(1) { return (it.object().status.phase == \"Running\") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag(\"USD{templateName}:latest\", \"USD{templateName}-staging:latest\") 10 } } } } } } }",
"oc create -f nodejs-sample-pipeline.yaml",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml",
"oc start-build nodejs-sample-pipeline",
"FROM registry.redhat.io/rhel8/buildah In this example, `/tmp/build` contains the inputs that build when this custom builder image is run. Normally the custom builder image fetches this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh /usr/bin/build.sh contains the actual custom build logic that will be run when this custom builder image is run. ENTRYPOINT [\"/usr/bin/build.sh\"]",
"FROM registry.access.redhat.com/ubi8/ubi RUN touch /tmp/build",
"#!/bin/sh Note that in this case the build inputs are part of the custom builder image, but normally this is retrieved from an external source. cd /tmp/input OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom build framework TAG=\"USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}\" performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . buildah requires a slight modification to the push secret provided by the service account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo \"{ \\\"auths\\\": \" ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo \"}\") > /tmp/.dockercfg push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG}",
"oc new-build --binary --strategy=docker --name custom-builder-image",
"oc start-build custom-builder-image --from-dir . -F",
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest",
"oc create -f buildconfig.yaml",
"kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {}",
"oc create -f imagestream.yaml",
"oc start-build sample-custom-build -F",
"oc start-build <buildconfig_name>",
"oc start-build --from-build=<build_name>",
"oc start-build <buildconfig_name> --follow",
"oc start-build <buildconfig_name> --env=<key>=<value>",
"oc start-build hello-world --from-repo=../hello-world --commit=v2",
"oc cancel-build <build_name>",
"oc cancel-build <build1_name> <build2_name> <build3_name>",
"oc cancel-build bc/<buildconfig_name>",
"oc cancel-build bc/<buildconfig_name>",
"oc delete bc <BuildConfigName>",
"oc delete --cascade=false bc <BuildConfigName>",
"oc describe build <build_name>",
"oc describe build <build_name>",
"oc logs -f bc/<buildconfig_name>",
"oc logs --version=<number> bc/<buildconfig_name>",
"sourceStrategy: env: - name: \"BUILD_LOGLEVEL\" value: \"2\" 1",
"type: \"GitHub\" github: secretReference: name: \"mysecret\"",
"- kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx",
"type: \"GitHub\" github: secretReference: name: \"mysecret\"",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github",
"oc describe bc/<name-of-your-BuildConfig>",
"<https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github",
"curl -H \"X-GitHub-Event: push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github",
"type: \"GitLab\" gitlab: secretReference: name: \"mysecret\"",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab",
"oc describe bc <name>",
"curl -H \"X-GitLab-Event: Push Hook\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab",
"type: \"Bitbucket\" bitbucket: secretReference: name: \"mysecret\"",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket",
"oc describe bc <name>",
"curl -H \"X-Event-Key: repo:push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket",
"type: \"Generic\" generic: secretReference: name: \"mysecret\" allowEnv: true 1",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic",
"curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic",
"git: uri: \"<url to git repository>\" ref: \"<optional git reference>\" commit: \"<commit hash identifying a specific git commit>\" author: name: \"<author name>\" email: \"<author e-mail>\" committer: name: \"<committer name>\" email: \"<committer e-mail>\" message: \"<commit message>\" env: 1 - name: \"<variable name>\" value: \"<variable value>\"",
"curl -H \"Content-Type: application/yaml\" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic",
"oc describe bc <name>",
"kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby-20-centos7\"",
"strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\"",
"type: \"ImageChange\" 1 imageChange: {} type: \"ImageChange\" 2 imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\"",
"strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>\"",
"type: \"ImageChange\" imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\" paused: true",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: bc-ict-example namespace: bc-ict-example-namespace spec: triggers: - imageChange: from: kind: ImageStreamTag name: input:latest namespace: bc-ict-example-namespace - imageChange: from: kind: ImageStreamTag name: input2:latest namespace: bc-ict-example-namespace type: ImageChange status: imageChangeTriggers: - from: name: input:latest namespace: bc-ict-example-namespace lastTriggerTime: \"2021-06-30T13:47:53Z\" lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69 - from: name: input2:latest namespace: bc-ict-example-namespace lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69 lastVersion: 1",
"Then you use the `name` and `namespace` from that build to find the corresponding image change trigger in `buildConfig.spec.triggers`.",
"type: \"ConfigChange\"",
"oc set triggers bc <name> --from-github",
"oc set triggers bc <name> --from-image='<image>'",
"oc set triggers bc <name> --from-bitbucket --remove",
"oc set triggers --help",
"postCommit: script: \"bundle exec rake test --verbose\"",
"postCommit: command: [\"/bin/bash\", \"-c\", \"bundle exec rake test --verbose\"]",
"postCommit: command: [\"bundle\", \"exec\", \"rake\", \"test\"] args: [\"--verbose\"]",
"oc set build-hook bc/mybc --post-commit --command -- bundle exec rake test --verbose",
"oc set build-hook bc/mybc --post-commit --script=\"bundle exec rake test --verbose\"",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2",
"resources: requests: 1 cpu: \"100m\" memory: \"256Mi\"",
"spec: completionDeadlineSeconds: 1800",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: nodeSelector: 1 key1: value1 key2: value2",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: artifact-build spec: output: to: kind: ImageStreamTag name: artifact-image:latest source: git: uri: https://github.com/openshift/openshift-jee-sample.git ref: \"master\" strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: image-build spec: output: to: kind: ImageStreamTag name: image-build:latest source: dockerfile: |- FROM jee-runtime:latest COPY ROOT.war /deployments/ROOT.war images: - from: 1 kind: ImageStreamTag name: artifact-image:latest paths: 2 - sourcePath: /wildfly/standalone/deployments/ROOT.war destinationDir: \".\" strategy: dockerStrategy: from: 3 kind: ImageStreamTag name: jee-runtime:latest triggers: - imageChange: {} type: ImageChange",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2",
"oc tag --source=docker registry.redhat.io/ubi8/ubi:latest ubi:latest -n openshift",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi namespace: openshift spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi8/ubi:latest name: latest referencePolicy: type: Source",
"oc tag --source=docker registry.redhat.io/ubi8/ubi:latest ubi:latest",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi8/ubi:latest name: latest referencePolicy: type: Source",
"RUN rm /etc/rhsm-host",
"strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi:latest volumes: - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement",
"FROM registry.redhat.io/ubi8/ubi:latest RUN dnf search kernel-devel --showduplicates && dnf install -y kernel-devel",
"[test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem",
"oc create configmap yum-repos-d --from-file /path/to/satellite.repo",
"strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi:latest volumes: - name: yum-repos-d mounts: - destinationPath: /etc/yum.repos.d source: type: ConfigMap configMap: name: yum-repos-d - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement",
"FROM registry.redhat.io/ubi8/ubi:latest RUN dnf search kernel-devel --showduplicates && dnf install -y kernel-devel",
"oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - my-share verbs: - use EOF",
"oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: my-csi-bc namespace: my-csi-app-namespace spec: runPolicy: Serial source: dockerfile: | FROM registry.redhat.io/ubi8/ubi:latest RUN ls -la /etc/pki/entitlement RUN rm /etc/rhsm-host RUN yum repolist --disablerepo=* RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms RUN yum -y update RUN yum install -y openshift-clients.x86_64 strategy: type: Docker dockerStrategy: volumes: - mounts: - destinationPath: \"/etc/pki/entitlement\" name: my-csi-shared-secret source: csi: driver: csi.sharedresource.openshift.io readOnly: true volumeAttributes: sharedSecret: my-share-bc type: CSI",
"oc start-build my-csi-bc -F",
"build.build.openshift.io/my-csi-bc-1 started Caching blobs under \"/var/cache/blobs\". Pulling image registry.redhat.io/ubi8/ubi:latest Trying to pull registry.redhat.io/ubi8/ubi:latest Getting image source signatures Copying blob sha256:5dcbdc60ea6b60326f98e2b49d6ebcb7771df4b70c6297ddf2d7dede6692df6e Copying blob sha256:8671113e1c57d3106acaef2383f9bbfe1c45a26eacb03ec82786a494e15956c3 Copying config sha256:b81e86a2cb9a001916dc4697d7ed4777a60f757f0b8dcc2c4d8df42f2f7edb3a Writing manifest to image destination Storing signatures Adding transient rw bind mount for /run/secrets/rhsm STEP 1/9: FROM registry.redhat.io/ubi8/ubi:latest STEP 2/9: RUN ls -la /etc/pki/entitlement total 360 drwxrwxrwt. 2 root root 80 Feb 3 20:28 . drwxr-xr-x. 10 root root 154 Jan 27 15:53 .. -rw-r--r--. 1 root root 3243 Feb 3 20:28 entitlement-key.pem -rw-r--r--. 1 root root 362540 Feb 3 20:28 entitlement.pem time=\"2022-02-03T20:28:32Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> 1ef7c6d8c1a STEP 3/9: RUN rm /etc/rhsm-host time=\"2022-02-03T20:28:33Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> b1c61f88b39 STEP 4/9: RUN yum repolist --disablerepo=* Updating Subscription Management repositories. --> b067f1d63eb STEP 5/9: RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms Repository 'rhocp-4.9-for-rhel-8-x86_64-rpms' is enabled for this system. time=\"2022-02-03T20:28:40Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> 03927607ebd STEP 6/9: RUN yum -y update Updating Subscription Management repositories. Upgraded: systemd-239-51.el8_5.3.x86_64 systemd-libs-239-51.el8_5.3.x86_64 systemd-pam-239-51.el8_5.3.x86_64 Installed: diffutils-3.6-6.el8.x86_64 libxkbcommon-0.9.1-1.el8.x86_64 xkeyboard-config-2.28-1.el8.noarch Complete! time=\"2022-02-03T20:29:05Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> db57e92ff63 STEP 7/9: RUN yum install -y openshift-clients.x86_64 Updating Subscription Management repositories. Installed: bash-completion-1:2.7-5.el8.noarch libpkgconf-1.4.2-1.el8.x86_64 openshift-clients-4.9.0-202201211735.p0.g3f16530.assembly.stream.el8.x86_64 pkgconf-1.4.2-1.el8.x86_64 pkgconf-m4-1.4.2-1.el8.noarch pkgconf-pkg-config-1.4.2-1.el8.x86_64 Complete! time=\"2022-02-03T20:29:19Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> 609507b059e STEP 8/9: ENV \"OPENSHIFT_BUILD_NAME\"=\"my-csi-bc-1\" \"OPENSHIFT_BUILD_NAMESPACE\"=\"my-csi-app-namespace\" --> cab2da3efc4 STEP 9/9: LABEL \"io.openshift.build.name\"=\"my-csi-bc-1\" \"io.openshift.build.namespace\"=\"my-csi-app-namespace\" COMMIT temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca --> 821b582320b Successfully tagged temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca 821b582320b41f1d7bab4001395133f86fa9cc99cc0b2b64c5a53f2b6750db91 Build complete, no image push requested",
"oc edit clusterrolebinding system:build-strategy-docker-binding",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\" 1 creationTimestamp: 2018-08-10T01:24:14Z name: system:build-strategy-docker-binding resourceVersion: \"225\" selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system%3Abuild-strategy-docker-binding uid: 17b1f3d4-9c3c-11e8-be62-0800277d20bf roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:build-strategy-docker subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:authenticated",
"oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated",
"oc edit clusterrole admin",
"oc edit clusterrole edit",
"kind: ClusterRole metadata: name: admin - apiGroups: - \"\" - build.openshift.io resources: - buildconfigs - buildconfigs/webhooks - builds/custom 1 - builds/source verbs: - create - delete - deletecollection - get - list - patch - update - watch",
"oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser",
"oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject",
"oc edit build.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 2 name: cluster resourceVersion: \"107233\" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists",
"requested access to the resource is denied",
"oc describe quota",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-",
"oc create configmap registry-cas -n openshift-config --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-cas\"}}}' --type=merge"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/cicd/builds |
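As a follow-up to the build-strategy restrictions described in section 2.11, you can ask the authorization layer whether a given user may still use a particular strategy. The following is a minimal verification sketch, not part of the official procedure: it reuses the devuser and devproject names from the examples above, assumes you run it as a user with impersonation rights (for example, a cluster administrator), and assumes your oc version supports the --subresource flag on oc auth can-i.

$ oc auth can-i create builds --subresource=docker -n devproject --as=devuser   # "yes" means devuser can still start docker-strategy builds in devproject
$ oc auth can-i create builds --subresource=source -n devproject --as=devuser   # the same check for the Source-to-Image strategy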
Chapter 12. kar | Chapter 12. kar 12.1. kar:create 12.1.1. Description Create a kar file for a list of feature repos 12.1.2. Syntax kar:create [options] repoName [features] 12.1.3. Arguments Name Description repoName Repository name. The kar will contain all features of the named repository by default features Names of the features to include. If set then only these features will be added 12.1.4. Options Name Description --help Display this help message 12.2. kar:install 12.2.1. Description Installs a KAR file. 12.2.2. Syntax kar:install [options] url 12.2.3. Arguments Name Description url The URL of the KAR file to install. 12.2.4. Options Name Description --help Display this help message --no-start Do not start the bundles automatically 12.3. kar:list 12.3.1. Description List the installed KAR files. 12.3.2. Syntax kar:list [options] 12.3.3. Options Name Description --help Display this help message --no-format Disable table rendered output 12.4. kar:uninstall 12.4.1. Description Uninstall a KAR file. 12.4.2. Syntax kar:uninstall [options] name 12.4.3. Arguments Name Description name The name of the KAR file to uninstall. 12.4.4. Options Name Description --help Display this help message | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_console_reference/kar |
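For orientation, the four kar commands described above are often used together in a single console session. The following is a hedged sketch only; the repository name, feature names, KAR file path, and KAR name are hypothetical and should be replaced with values from your own environment.

karaf@root()> kar:create my-repo feature-a feature-b    # package only feature-a and feature-b from the my-repo feature repository
karaf@root()> kar:install file:/tmp/my-app.kar          # install a KAR file from a local URL and start its bundles
karaf@root()> kar:list                                  # confirm that the KAR file is now listed
karaf@root()> kar:uninstall my-app                      # uninstall the KAR file again by name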
Chapter 9. Remote worker nodes on the network edge | Chapter 9. Remote worker nodes on the network edge 9.1. Using remote worker nodes at the network edge You can configure OpenShift Container Platform clusters with nodes located at your network edge. In this topic, they are called remote worker nodes . A typical cluster with remote worker nodes combines on-premise master and worker nodes with worker nodes in other locations that connect to the cluster. This topic is intended to provide guidance on best practices for using remote worker nodes and does not contain specific configuration details. There are multiple use cases across different industries, such as telecommunications, retail, manufacturing, and government, for using a deployment pattern with remote worker nodes. For example, you can separate and isolate your projects and workloads by combining the remote worker nodes into Kubernetes zones . However, having remote worker nodes can introduce higher latency, intermittent loss of network connectivity, and other issues. Among the challenges in a cluster with remote worker nodes are: Network separation : The OpenShift Container Platform control plane and the remote worker nodes must be able to communicate with each other. Because of the distance between the control plane and the remote worker nodes, network issues could prevent this communication. See Network separation with remote worker nodes for information on how OpenShift Container Platform responds to network separation and for methods to diminish the impact on your cluster. Power outage : Because the control plane and remote worker nodes are in separate locations, a power outage at the remote location or at any point between the two can negatively impact your cluster. See Power loss on remote worker nodes for information on how OpenShift Container Platform responds to a node losing power and for methods to diminish the impact on your cluster. Latency spikes or temporary reduction in throughput : As with any network, any changes in network conditions between your cluster and the remote worker nodes can negatively impact your cluster. OpenShift Container Platform offers multiple worker latency profiles that let you control the reaction of the cluster to latency issues. Note the following limitations when planning a cluster with remote worker nodes: OpenShift Container Platform does not support remote worker nodes that use a different cloud provider than the on-premise cluster uses. Moving workloads from one Kubernetes zone to a different Kubernetes zone can be problematic due to system and environment issues, such as a specific type of memory not being available in a different zone. Proxies and firewalls can present additional limitations that are beyond the scope of this document. See the relevant OpenShift Container Platform documentation for how to address such limitations, such as Configuring your firewall . You are responsible for configuring and maintaining L2/L3-level network connectivity between the control plane and the network-edge nodes. 9.1.1. Adding remote worker nodes Adding remote worker nodes to a cluster involves some additional considerations. You must ensure that a route or a default gateway is in place to route traffic between the control plane and every remote worker node. You must place the Ingress VIP on the control plane. Adding remote worker nodes with user-provisioned infrastructure is identical to adding other worker nodes.
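Before adding a remote worker node, it can help to confirm basic reachability from the edge site toward the control plane. The following is a minimal sketch run from the remote host, not an official procedure; the destination address and API host name are hypothetical placeholders for your cluster's values.

$ ip route get 198.51.100.10                            # confirm that a route or default gateway exists toward the control plane network
$ curl -k https://api.cluster.example.com:6443/healthz  # confirm that the cluster API endpoint answers from the edge site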
To add remote worker nodes to an installer-provisioned cluster at install time, specify the subnet for each worker node in the install-config.yaml file before installation. There are no additional settings required for the DHCP server. You must use virtual media, because the remote worker nodes will not have access to the local provisioning network. To add remote worker nodes to an installer-provisioned cluster deployed with a provisioning network, ensure that the virtualMediaViaExternalNetwork flag is set to true in the install-config.yaml file so that the installer adds the nodes using virtual media. Remote worker nodes will not have access to the local provisioning network. They must be deployed with virtual media rather than PXE. Additionally, specify each subnet for each group of remote worker nodes and the control plane nodes in the DHCP server. Additional resources Establishing communications between subnets Configuring host network interfaces for subnets Configuring network components to run on the control plane 9.1.2. Network separation with remote worker nodes All nodes send heartbeats to the Kubernetes Controller Manager Operator (kube controller) in the OpenShift Container Platform cluster every 10 seconds. If the cluster does not receive heartbeats from a node, OpenShift Container Platform responds using several default mechanisms. OpenShift Container Platform is designed to be resilient to network partitions and other disruptions. You can mitigate some of the more common disruptions, such as interruptions from software upgrades, network splits, and routing issues. Mitigation strategies include ensuring that pods on remote worker nodes request the correct amount of CPU and memory resources, configuring an appropriate replication policy, using redundancy across zones, and using Pod Disruption Budgets on workloads. If the kube controller loses contact with a node after a configured period, the node controller on the control plane updates the node health to Unhealthy and marks the node Ready condition as Unknown . In response, the scheduler stops scheduling pods to that node. The on-premise node controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules pods on the node for eviction after five minutes, by default. If a workload controller, such as a Deployment object or StatefulSet object, is directing traffic to pods on the unhealthy node and other nodes can reach the cluster, OpenShift Container Platform routes the traffic away from the pods on the node. Nodes that cannot reach the cluster do not get updated with the new traffic routing. As a result, the workloads on those nodes might continue to attempt to reach the unhealthy node. You can mitigate the effects of connection loss by: using daemon sets to create pods that tolerate the taints using static pods that automatically restart if a node goes down using Kubernetes zones to control pod eviction configuring pod tolerations to delay or avoid pod eviction configuring the kubelet to control the timing of when it marks nodes as unhealthy. For more information on using these objects in a cluster with remote worker nodes, see About remote worker node strategies . 9.1.3. Power loss on remote worker nodes If a remote worker node loses power or restarts ungracefully, OpenShift Container Platform responds using several default mechanisms.
If the Kubernetes Controller Manager Operator (kube controller) loses contact with a node after a configured period, the control plane updates the node health to Unhealthy and marks the node Ready condition as Unknown . In response, the scheduler stops scheduling pods to that node. The on-premise node controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules pods on the node for eviction after five minutes, by default. On the node, the pods must be restarted when the node recovers power and reconnects with the control plane. Note If you want the pods to restart immediately when the node restarts, use static pods. After the node restarts, the kubelet also restarts and attempts to restart the pods that were scheduled on the node. If the connection to the control plane takes longer than the default five minutes, the control plane cannot update the node health and remove the node.kubernetes.io/unreachable taint. On the node, the kubelet terminates any running pods. When these conditions are cleared, the scheduler can start scheduling pods to that node. You can mitigate the effects of power loss by: using daemon sets to create pods that tolerate the taints using static pods that automatically restart with a node configuring pod tolerations to delay or avoid pod eviction configuring the kubelet to control the timing of when the node controller marks nodes as unhealthy. For more information on using these objects in a cluster with remote worker nodes, see About remote worker node strategies . 9.1.4. Latency spikes or temporary reduction in throughput to remote workers If the cluster administrator has performed latency tests for platform verification, they can discover the need to adjust the operation of the cluster to ensure stability in cases of high latency. The cluster administrator needs to change only one parameter, recorded in a file, which controls four parameters affecting how supervisory processes read status and interpret the health of the cluster. Changing only that one parameter provides cluster tuning in an easy, supportable manner. The Kubelet process provides the starting point for monitoring cluster health. The Kubelet sets status values for all nodes in the OpenShift Container Platform cluster. The Kubernetes Controller Manager ( kube controller ) reads the status values every 10 seconds, by default. If the kube controller cannot read a node status value, it loses contact with that node after a configured period. The default behavior is: The node controller on the control plane updates the node health to Unhealthy and marks the node Ready condition as Unknown . In response, the scheduler stops scheduling pods to that node. The Node Lifecycle Controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules any pods on the node for eviction after five minutes, by default. This behavior can cause problems if your network is prone to latency issues, especially if you have nodes at the network edge. In some cases, the Kubernetes Controller Manager might not receive an update from a healthy node due to network latency. The Kubelet evicts pods from the node even though the node is healthy. To avoid this problem, you can use worker latency profiles to tune how long the Kubelet and the Kubernetes Controller Manager wait for status updates before taking action. These adjustments help to ensure that your cluster runs properly if network latency between the control plane and the worker nodes is not optimal.
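If you decide to apply one of these profiles to a running cluster, the change is made on the cluster-scoped node configuration resource. The following is a hedged sketch only: it assumes that your cluster version exposes the spec.workerLatencyProfile field on the nodes.config.openshift.io/cluster resource as described in the worker latency profiles guide linked below, and the profile name shown is just an example.

$ oc patch nodes.config.openshift.io/cluster --type=merge -p '{"spec":{"workerLatencyProfile":"MediumUpdateAverageReaction"}}'   # switch from the Default profile to the medium-latency profile
$ oc get nodes.config.openshift.io/cluster -o yaml                                                                               # verify that the profile is recorded in the spec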
These worker latency profiles contain three sets of parameters that are predefined with carefully tuned values to control the reaction of the cluster to increased latency. There is no need to experimentally find the best values manually. You can configure worker latency profiles when installing a cluster or at any time you notice increased latency in your cluster network. Additional resources Improving cluster stability in high latency environments using worker latency profiles 9.1.5. Remote worker node strategies If you use remote worker nodes, consider which objects to use to run your applications. It is recommended to use daemon sets or static pods based on the behavior you want in the event of network issues or power loss. In addition, you can use Kubernetes zones and tolerations to control or avoid pod evictions if the control plane cannot reach remote worker nodes. Daemon sets Daemon sets are the best approach to managing pods on remote worker nodes for the following reasons: Daemon sets do not typically need rescheduling behavior. If a node disconnects from the cluster, pods on the node can continue to run. OpenShift Container Platform does not change the state of daemon set pods, and leaves the pods in the state they last reported. For example, if a daemon set pod is in the Running state, when a node stops communicating, the pod keeps running and is assumed to be running by OpenShift Container Platform. Daemon set pods, by default, are created with NoExecute tolerations for the node.kubernetes.io/unreachable and node.kubernetes.io/not-ready taints with no tolerationSeconds value. These default values ensure that daemon set pods are never evicted if the control plane cannot reach a node. For example: Tolerations added to daemon set pods by default tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute - key: node.kubernetes.io/disk-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/memory-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/pid-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/unschedulable operator: Exists effect: NoSchedule Daemon sets can use labels to ensure that a workload runs on a matching worker node. You can use an OpenShift Container Platform service endpoint to load balance daemon set pods. Note Daemon sets do not schedule pods after a reboot of the node if OpenShift Container Platform cannot reach the node. Static pods If you want pods to restart if a node reboots, after a power loss for example, consider static pods . The kubelet on a node automatically restarts static pods as the node restarts. Note Static pods cannot use secrets and config maps. Kubernetes zones Kubernetes zones can slow down the rate of pod evictions or, in some cases, stop them completely. When the control plane cannot reach a node, the node controller, by default, applies node.kubernetes.io/unreachable taints and evicts pods at a rate of 0.1 nodes per second. However, in a cluster that uses Kubernetes zones, pod eviction behavior is altered. If a zone is fully disrupted, where all nodes in the zone have a Ready condition that is False or Unknown , the control plane does not apply the node.kubernetes.io/unreachable taint to the nodes in that zone. For partially disrupted zones, where more than 55% of the nodes have a False or Unknown condition, the pod eviction rate is reduced to 0.01 nodes per second.
Nodes in smaller clusters, with fewer than 50 nodes, are not tainted. Your cluster must have more than three zones for this behavior to take effect. You assign a node to a specific zone by applying the topology.kubernetes.io/region label in the node specification. Sample node labels for Kubernetes zones kind: Node apiVersion: v1 metadata: labels: topology.kubernetes.io/region=east KubeletConfig objects You can adjust how often the kubelet checks the state of each node. To set the interval that affects the timing of when the on-premise node controller marks nodes with the Unhealthy or Unreachable condition, create a KubeletConfig object that contains the node-status-update-frequency and node-status-report-frequency parameters. The kubelet on each node determines the node status as defined by the node-status-update-frequency setting and reports that status to the cluster based on the node-status-report-frequency setting. By default, the kubelet determines the node status every 10 seconds and reports the status every minute. However, if the node state changes, the kubelet reports the change to the cluster immediately. OpenShift Container Platform uses the node-status-report-frequency setting only when the Node Lease feature gate is enabled, which is the default state in OpenShift Container Platform clusters. If the Node Lease feature gate is disabled, the node reports its status based on the node-status-update-frequency setting. Example kubelet config apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker 1 kubeletConfig: node-status-update-frequency: 2 - "10s" node-status-report-frequency: 3 - "1m" 1 Specify the type of node to which this KubeletConfig object applies using the label from the MachineConfig object. 2 Specify the frequency at which the kubelet checks the status of a node associated with this MachineConfig object. The default value is 10s . If you change this default, the node-status-report-frequency value is changed to the same value. 3 Specify the frequency at which the kubelet reports the status of a node associated with this MachineConfig object. The default value is 1m . The node-status-update-frequency parameter works with the node-monitor-grace-period parameter. The node-monitor-grace-period parameter specifies how long OpenShift Container Platform waits after a node associated with a MachineConfig object is marked Unhealthy if the controller manager does not receive the node heartbeat. Workloads on the node continue to run after this time. If the remote worker node rejoins the cluster after node-monitor-grace-period expires, pods continue to run. New pods can be scheduled to that node. The node-monitor-grace-period interval is 40s . The node-status-update-frequency value must be lower than the node-monitor-grace-period value. Note Modifying the node-monitor-grace-period parameter is not supported. Tolerations You can use pod tolerations to mitigate the effects if the on-premise node controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to a node it cannot reach. A taint with the NoExecute effect affects pods that are running on the node in the following ways: Pods that do not tolerate the taint are queued for eviction. Pods that tolerate the taint without specifying a tolerationSeconds value in their toleration specification remain bound forever.
Pods that tolerate the taint with a specified tolerationSeconds value remain bound for the specified amount of time. After the time elapses, the pods are queued for eviction. Note Unless tolerations are explicitly set, Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300 , meaning that pods remain bound for 5 minutes if either of these taints is detected. You can delay or avoid pod eviction by configuring pod tolerations with the NoExecute effect for the node.kubernetes.io/unreachable and node.kubernetes.io/not-ready taints. Example toleration in a pod spec ... tolerations: - key: "node.kubernetes.io/unreachable" operator: "Exists" effect: "NoExecute" 1 - key: "node.kubernetes.io/not-ready" operator: "Exists" effect: "NoExecute" 2 tolerationSeconds: 600 3 ... 1 The NoExecute effect without tolerationSeconds lets pods remain forever if the control plane cannot reach the node. 2 The NoExecute effect with tolerationSeconds : 600 lets pods remain for 10 minutes if the control plane marks the node as Unhealthy . 3 You can specify your own tolerationSeconds value. Other types of OpenShift Container Platform objects You can use replica sets, deployments, and replication controllers. The scheduler can reschedule these pods onto other nodes after the node is disconnected for five minutes. Rescheduling onto other nodes can be beneficial for some workloads, such as REST APIs, where an administrator can guarantee a specific number of pods are running and accessible. Note When working with remote worker nodes, rescheduling pods on different nodes might not be acceptable if remote worker nodes are intended to be reserved for specific functions. Stateful sets do not get restarted when there is an outage. The pods remain in the terminating state until the control plane can acknowledge that the pods are terminated. To avoid scheduling a pod to a node that does not have access to the same type of persistent storage, OpenShift Container Platform cannot migrate pods that require persistent volumes to other zones in the case of network separation. Additional resources For more information on daemon sets, see DaemonSets . For more information on taints and tolerations, see Controlling pod placement using node taints . For more information on configuring KubeletConfig objects, see Creating a KubeletConfig CRD . For more information on replica sets, see ReplicaSets . For more information on deployments, see Deployments . For more information on replication controllers, see Replication controllers . For more information on the controller manager, see Kubernetes Controller Manager Operator . | [
"tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute - key: node.kubernetes.io/disk-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/memory-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/pid-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/unschedulable operator: Exists effect: NoSchedule",
"kind: Node apiVersion: v1 metadata: labels: topology.kubernetes.io/region=east",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker 1 kubeletConfig: node-status-update-frequency: 2 - \"10s\" node-status-report-frequency: 3 - \"1m\"",
"tolerations: - key: \"node.kubernetes.io/unreachable\" operator: \"Exists\" effect: \"NoExecute\" 1 - key: \"node.kubernetes.io/not-ready\" operator: \"Exists\" effect: \"NoExecute\" 2 tolerationSeconds: 600 3"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/nodes/remote-worker-nodes-on-the-network-edge |
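The daemon set guidance above relies on node labels to keep a workload on remote worker nodes. The following is a minimal, illustrative sketch of that approach rather than a manifest from the product documentation: the node label (node-role.kubernetes.io/remote-worker), namespace, and image are placeholder values to replace with your own.
$ cat <<'EOF' | oc apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: remote-worker-agent
  namespace: example-remote-workers
spec:
  selector:
    matchLabels:
      app: remote-worker-agent
  template:
    metadata:
      labels:
        app: remote-worker-agent
    spec:
      # Assumed label; apply it to your remote worker nodes first, for example:
      # oc label node <node_name> node-role.kubernetes.io/remote-worker=""
      nodeSelector:
        node-role.kubernetes.io/remote-worker: ""
      containers:
      - name: agent
        image: registry.example.com/remote-worker-agent:latest
EOF
Because these are daemon set pods, they receive the default NoExecute tolerations listed earlier and keep running if the control plane loses contact with the node.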
Chapter 2. Requirements | Chapter 2. Requirements 2.1. Red Hat Virtualization Manager Requirements 2.1.1. Hardware Requirements The minimum and recommended hardware requirements outlined here are based on a typical small to medium-sized installation. The exact requirements vary between deployments based on sizing and load. Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware certification? . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see Red Hat certified hardware . Table 2.1. Red Hat Virtualization Manager Hardware Requirements Resource Minimum Recommended CPU A dual core x86_64 CPU. A quad core x86_64 CPU or multiple dual core x86_64 CPUs. Memory 4 GB of available system RAM if Data Warehouse is not installed and if memory is not being consumed by existing processes. 16 GB of system RAM. Hard Disk 25 GB of locally accessible, writable disk space. 50 GB of locally accessible, writable disk space. You can use the RHV Manager History Database Size Calculator to calculate the appropriate disk space for the Manager history database size. Network Interface 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 2.1.2. Browser Requirements The following browser versions and operating systems can be used to access the Administration Portal and the VM Portal. Browser support is divided into tiers: Tier 1: Browser and operating system combinations that are fully tested and fully supported. Red Hat Engineering is committed to fixing issues with browsers on this tier. Tier 2: Browser and operating system combinations that are partially tested, and are likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with browsers on this tier. Tier 3: Browser and operating system combinations that are not tested, but may work. Minimal support is provided for this tier. Red Hat Engineering will attempt to fix only minor issues with browsers on this tier. Table 2.2. Browser Requirements Support Tier Operating System Family Browser Tier 1 Red Hat Enterprise Linux Mozilla Firefox Extended Support Release (ESR) version Any Most recent version of Google Chrome, Mozilla Firefox, or Microsoft Edge Tier 2 Tier 3 Any Earlier versions of Google Chrome or Mozilla Firefox Any Other browsers 2.1.3. Client Requirements Virtual machine consoles can only be accessed using supported Remote Viewer ( virt-viewer ) clients on Red Hat Enterprise Linux and Windows. To install virt-viewer , see Installing Supporting Components on Client Machines in the Virtual Machine Management Guide . Installing virt-viewer requires Administrator privileges. You can access virtual machine consoles using the SPICE, VNC, or RDP (Windows only) protocols. You can install the QXLDOD graphical driver in the guest operating system to improve the functionality of SPICE. SPICE currently supports a maximum resolution of 2560x1600 pixels. Client Operating System SPICE Support Supported QXLDOD drivers are available on Red Hat Enterprise Linux 7.2 and later, and Windows 10. Note SPICE may work with Windows 8 or 8.1 using QXLDOD drivers, but it is neither certified nor tested. 2.1.4. Operating System Requirements The Red Hat Virtualization Manager must be installed on a base installation of Red Hat Enterprise Linux 8.6. 
Do not install any additional packages after the base installation, as they may cause dependency issues when attempting to install the packages required by the Manager. Do not enable additional repositories other than those required for the Manager installation. 2.2. Host Requirements Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware certification? . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see Find a certified solution . For more information on the requirements and limitations that apply to guests see Red Hat Enterprise Linux Technology Capabilities and Limits and Supported Limits for Red Hat Virtualization . 2.2.1. CPU Requirements All CPUs must have support for the Intel 64 or AMD64 CPU extensions, and the AMD-V or Intel VT hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required. The following CPU models are supported: AMD Opteron G4 Opteron G5 EPYC Intel Nehalem Westmere SandyBridge IvyBridge Haswell Broadwell Skylake Client Skylake Server Cascadelake Server IBM POWER8 POWER9 For each CPU model with security updates, the CPU Type lists a basic type and a secure type. For example: Intel Cascadelake Server Family Secure Intel Cascadelake Server Family The Secure CPU type contains the latest updates. For details, see BZ# 1731395 2.2.1.1. Checking if a Processor Supports the Required Flags You must enable virtualization in the BIOS. Power off and reboot the host after this change to ensure that the change is applied. Procedure At the Red Hat Enterprise Linux or Red Hat Virtualization Host boot screen, press any key and select the Boot or Boot with serial console entry from the list. Press Tab to edit the kernel parameters for the selected option. Ensure there is a space after the last kernel parameter listed, and append the parameter rescue . Press Enter to boot into rescue mode. At the prompt, determine that your processor has the required extensions and that they are enabled by running this command: If any output is shown, the processor is hardware virtualization capable. If no output is shown, your processor may still support hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer. 2.2.2. Memory Requirements The minimum required RAM is 2 GB. For cluster levels 4.2 to 4.5, the maximum supported RAM per VM in Red Hat Virtualization Host is 6 TB. For cluster levels 4.6 to 4.7, the maximum supported RAM per VM in Red Hat Virtualization Host is 16 TB. However, the amount of RAM required varies depending on guest operating system requirements, guest application requirements, and guest memory activity and usage. KVM can also overcommit physical RAM for virtualized guests, allowing you to provision guests with RAM requirements greater than what is physically present, on the assumption that the guests are not all working concurrently at peak load. KVM does this by only allocating RAM for guests as required and shifting underutilized guests into swap. 2.2.3. Storage Requirements Hosts require storage to store configuration, logs, kernel dumps, and for use as swap space. Storage can be local or network-based.
Red Hat Virtualization Host (RHVH) can boot with one, some, or all of its default allocations in network storage. Booting from network storage can result in a freeze if there is a network disconnect. Adding a drop-in multipath configuration file can help address losses in network connectivity. If RHVH boots from SAN storage and loses connectivity, the files become read-only until network connectivity is restored. Using network storage might result in a performance downgrade. The minimum storage requirements of RHVH are documented in this section. The storage requirements for Red Hat Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration but are expected to be greater than those of RHVH. The minimum storage requirements for host installation are listed below. However, use the default allocations, which use more storage space. / (root) - 6 GB /home - 1 GB /tmp - 1 GB /boot - 1 GB /var - 5 GB /var/crash - 10 GB /var/log - 8 GB /var/log/audit - 2 GB /var/tmp - 10 GB swap - 1 GB. See What is the recommended swap size for Red Hat platforms? for details. Anaconda reserves 20% of the thin pool size within the volume group for future metadata expansion. This is to prevent an out-of-the-box configuration from running out of space under normal usage conditions. Overprovisioning of thin pools during installation is also not supported. Minimum Total - 64 GiB If you are also installing the RHV-M Appliance for self-hosted engine installation, /var/tmp must be at least 10 GB. If you plan to use memory overcommitment, add enough swap space to provide virtual memory for all of the virtual machines. See Memory Optimization . 2.2.4. PCI Device Requirements Hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. Each host should have two network interfaces, with one dedicated to supporting network-intensive activities, such as virtual machine migration. The performance of such operations is limited by the bandwidth available. For information about how to use PCI Express and conventional PCI devices with Intel Q35-based virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine . 2.2.5. Device Assignment Requirements If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met: CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default. Firmware must support IOMMU. CPU root ports used must support ACS or ACS-equivalent capability. PCIe devices must support ACS or ACS-equivalent capability. All PCIe switches and bridges between the PCIe device and the root port should support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group, and can only be assigned to the same virtual machine. For GPU support, Red Hat Enterprise Linux 8 supports PCI device assignment of PCIe-based NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card. Check vendor specifications and datasheets to confirm that your hardware meets these requirements.
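Before committing to a device assignment design, it can help to confirm from the running host that the virtualization and IOMMU prerequisites described above are actually active. The following is an informal sketch, not part of the official procedure; virt-host-validate is provided by the libvirt client packages and its exact output varies by release.
$ grep -E 'svm|vmx' /proc/cpuinfo | grep nx    # hardware virtualization and NX, as in the boot-time check
$ virt-host-validate qemu                      # reports IOMMU and device assignment readiness, among other checks
$ ls /sys/kernel/iommu_groups/                 # non-empty when the IOMMU is active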
The lspci -v command can be used to print information for PCI devices already installed on a system. 2.2.6. vGPU Requirements A host must meet the following requirements in order for virtual machines on that host to use a vGPU: vGPU-compatible GPU GPU-enabled host kernel Installed GPU with correct drivers Select a vGPU type and the number of instances that you would like to use with this virtual machine using the Manage vGPU dialog in the Administration Portal Host Devices tab of the virtual machine. vGPU-capable drivers installed on each host in the cluster vGPU-supported virtual machine operating system with vGPU drivers installed 2.3. Networking requirements 2.3.1. General requirements Red Hat Virtualization requires IPv6 to remain enabled on the physical or virtual machine running the Manager. Do not disable IPv6 on the Manager machine, even if your systems do not use it. 2.3.2. Firewall Requirements for DNS, NTP, and IPMI Fencing The firewall requirements for all of the following topics are special cases that require individual consideration. DNS and NTP Red Hat Virtualization does not create a DNS or NTP server, so the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, define exceptions for requests that are sent to DNS and NTP servers. Important The Red Hat Virtualization Manager and all hosts (Red Hat Virtualization Host and Red Hat Enterprise Linux host) must have a fully qualified domain name and full, perfectly-aligned forward and reverse name resolution. Running a DNS service as a virtual machine in the Red Hat Virtualization environment is not supported. All DNS services the Red Hat Virtualization environment uses must be hosted outside of the environment. Use DNS instead of the /etc/hosts file for name resolution. Using a hosts file typically requires more work and has a greater chance for errors. IPMI and Other Fencing Mechanisms (optional) For IPMI (Intelligent Platform Management Interface) and other fencing mechanisms, the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound IPMI traffic to ports on any destination address. If you disable outgoing traffic, make exceptions for requests being sent to your IPMI or fencing servers. Each Red Hat Virtualization Host and Red Hat Enterprise Linux host in the cluster must be able to connect to the fencing devices of all other hosts in the cluster. If the cluster hosts are experiencing an error (network error, storage error... ) and cannot function as hosts, they must be able to connect to other hosts in the data center. The specific port number depends on the type of the fence agent you are using and how it is configured. The firewall requirement tables in the following sections do not represent this option. 2.3.3. Red Hat Virtualization Manager Firewall Requirements The Red Hat Virtualization Manager requires that a number of ports be opened to allow network traffic through the system's firewall. The engine-setup script can configure the firewall automatically. The firewall configuration documented here assumes a default configuration. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.3. 
Red Hat Virtualization Manager Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default M1 - ICMP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Optional. May help in diagnosis. No M2 22 TCP System(s) used for maintenance of the Manager, including backend configuration and software upgrades. Red Hat Virtualization Manager Secure Shell (SSH) access. Optional. Yes M3 2222 TCP Clients accessing virtual machine serial consoles. Red Hat Virtualization Manager Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes M4 80, 443 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts REST API clients Red Hat Virtualization Manager Provides HTTP (port 80, not encrypted) and HTTPS (port 443, encrypted) access to the Manager. HTTP redirects connections to HTTPS. Yes M5 6100 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Manager Provides websocket proxy access for a web-based console client, noVNC , when the websocket proxy is running on the Manager. No M6 7410 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager If Kdump is enabled on the hosts, open this port for the fence_kdump listener on the Manager. See fence_kdump Advanced Configuration . fence_kdump doesn't provide a way to encrypt the connection. However, you can manually configure this port to block access from hosts that are not eligible. No M7 54323 TCP Administration Portal clients Red Hat Virtualization Manager ( ovirt-imageio service) Required for communication with the ovirt-imageio service. Yes M8 6642 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Open Virtual Network (OVN) southbound database Connect to Open Virtual Network (OVN) database Yes M9 9696 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Networking API Yes, with configuration generated by engine-setup. M10 35357 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Identity API Yes, with configuration generated by engine-setup. M11 53 TCP, UDP Red Hat Virtualization Manager DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. Open by default. No M12 123 UDP Red Hat Virtualization Manager NTP Server NTP requests from ports above 1023 to port 123, and responses. Open by default. No Note A port for the OVN northbound database (6641) is not listed because, in the default configuration, the only client for the OVN northbound database (6641) is ovirt-provider-ovn . Because they both run on the same host, their communication is not visible to the network. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Manager to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.4. Host Firewall Requirements Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts (RHVH) require a number of ports to be opened to allow network traffic through the system's firewall. The firewall rules are automatically configured by default when adding a new host to the Manager, overwriting any pre-existing firewall configuration.
To disable automatic firewall configuration when adding a new host, clear the Automatically configure host firewall check box under Advanced Parameters . To customize the host firewall rules, see RHV: How to customize the Host's firewall rules? . Note A diagram of these firewall requirements is available at Red Hat Virtualization: Firewall Requirements Diagram . You can use the IDs in the table to look up connections in the diagram. Table 2.4. Virtualization Host Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default H1 22 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access. Optional. Yes H2 2223 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes H3 161 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Simple network management protocol (SNMP). Only required if you want Simple Network Management Protocol traps sent from the host to one or more external SNMP managers. Optional. No H4 111 TCP NFS storage server Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NFS connections. Optional. No H5 5900 - 6923 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines. Yes (optional) H6 5989 TCP, UDP Common Information Model Object Manager (CIMOM) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the host. Only required if you want to use a CIMOM to monitor the virtual machines in your virtualization environment. Optional. No H7 9090 TCP Red Hat Virtualization Manager Client machines Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required to access the Cockpit web interface, if installed. Yes H8 16514 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration using libvirt . Yes H9 49152 - 49215 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manual migration of virtual machines. Yes. Depending on agent for fencing, migration is done through libvirt. H10 54321 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts VDSM communications with the Manager and other virtualization hosts. Yes H11 54322 TCP Red Hat Virtualization Manager ovirt-imageio service Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required for communication with the ovirt-imageio service. Yes H12 6081 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required, when Open Virtual Network (OVN) is used as a network provider, to allow OVN to create tunnels between hosts. No H13 53 TCP, UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. This port is required and open by default.
No H14 123 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NTP Server NTP requests from ports above 1023 to port 123, and responses. This port is required and open by default. No H15 4500 TCP, UDP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes H16 500 UDP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes H17 - AH, ESP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes Note By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.5. Database Server Firewall Requirements Red Hat Virtualization supports the use of a remote database server for the Manager database ( engine ) and the Data Warehouse database ( ovirt-engine-history ). If you plan to use a remote database server, it must allow connections from the Manager and the Data Warehouse service (which can be separate from the Manager). Similarly, if you plan to access a local or remote Data Warehouse database from an external system, the database must allow connections from that system. Important Accessing the Manager database from external systems is not supported. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.5. Database Server Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default D1 5432 TCP, UDP Red Hat Virtualization Manager Data Warehouse service Manager ( engine ) database server Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. No, but can be enabled . D2 5432 TCP, UDP External systems Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. Disabled by default. No, but can be enabled . 2.3.6. Maximum Transmission Unit Requirements The recommended Maximum Transmission Units (MTU) setting for Hosts during deployment is 1500. It is possible to update this setting to a different MTU after the environment is set up. For more information on changing the MTU setting, see How to change the Hosted Engine VM network MTU . | [
"grep -E 'svm|vmx' /proc/cpuinfo | grep nx"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/RHV_requirements |
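The engine-setup script normally opens the Manager ports from Table 2.3 for you. If you ever need to reproduce part of that configuration by hand, a sketch along the following lines can be used; it covers only a selection of the rows above, assumes the default firewalld zone, and is not a substitute for the configuration that engine-setup generates.
# Administration Portal, VM Portal, and REST API (M4), serial console proxy (M3),
# websocket proxy (M5), fence_kdump listener (M6), and ovirt-imageio (M7)
$ firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp
$ firewall-cmd --permanent --add-port=2222/tcp --add-port=6100/tcp
$ firewall-cmd --permanent --add-port=7410/udp --add-port=54323/tcp
$ firewall-cmd --reload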
Chapter 14. Logging alerts | Chapter 14. Logging alerts 14.1. Default logging alerts Logging alerts are installed as part of the Red Hat OpenShift Logging Operator installation. Alerts depend on metrics exported by the log collection and log storage backends. These metrics are enabled if you selected the option to Enable Operator recommended cluster monitoring on this namespace when installing the Red Hat OpenShift Logging Operator. Default logging alerts are sent to the OpenShift Container Platform monitoring stack Alertmanager in the openshift-monitoring namespace, unless you have disabled the local Alertmanager instance. 14.1.1. Accessing the Alerting UI from the Administrator perspective 14.1.2. Accessing the Alerting UI from the Developer perspective The Alerting UI is accessible through the Developer perspective of the OpenShift Container Platform web console. From the Administrator perspective, go to Observe Alerting . The three main pages in the Alerting UI in this perspective are the Alerts , Silences , and Alerting rules pages. From the Developer perspective, go to Observe and go to the Alerts tab. Select the project that you want to manage alerts for from the Project: list. In this perspective, alerts, silences, and alerting rules are all managed from the Alerts tab. The results shown in the Alerts tab are specific to the selected project. Note In the Developer perspective, you can select from core OpenShift Container Platform and user-defined projects that you have access to in the Project: <project_name> list. However, alerts, silences, and alerting rules relating to core OpenShift Container Platform projects are not displayed if you are not logged in as a cluster administrator. 14.1.3. Logging collector alerts In logging 5.8 and later versions, the following alerts are generated by the Red Hat OpenShift Logging Operator. You can view these alerts in the OpenShift Container Platform web console. Alert Name Message Description Severity CollectorNodeDown Prometheus could not scrape namespace / pod collector component for more than 10m. Collector cannot be scraped. Critical CollectorHighErrorRate value % of records have resulted in an error by namespace / pod collector component. namespace / pod collector component errors are high. Critical CollectorVeryHighErrorRate value % of records have resulted in an error by namespace / pod collector component. namespace / pod collector component errors are very high. Critical 14.1.4. Vector collector alerts In logging 5.7 and later versions, the following alerts are generated by the Vector collector. You can view these alerts in the OpenShift Container Platform web console. Table 14.1. Vector collector alerts Alert Message Description Severity CollectorHighErrorRate <value> of records have resulted in an error by vector <instance>. The number of vector output errors is high, by default more than 10 in the 15 minutes. Warning CollectorNodeDown Prometheus could not scrape vector <instance> for more than 10m. Vector is reporting that Prometheus could not scrape a specific Vector instance. Critical CollectorVeryHighErrorRate <value> of records have resulted in an error by vector <instance>. The number of Vector component errors are very high, by default more than 25 in the 15 minutes. Critical FluentdQueueLengthIncreasing In the last 1h, fluentd <instance> buffer queue length constantly increased more than 1. Current value is <value>. Fluentd is reporting that the queue size is increasing. Warning 14.1.5. 
Fluentd collector alerts The following alerts are generated by the legacy Fluentd log collector. You can view these alerts in the OpenShift Container Platform web console. Table 14.2. Fluentd collector alerts Alert Message Description Severity FluentDHighErrorRate <value> of records have resulted in an error by fluentd <instance>. The number of FluentD output errors is high, by default more than 10 in the last 15 minutes. Warning FluentdNodeDown Prometheus could not scrape fluentd <instance> for more than 10m. Fluentd is reporting that Prometheus could not scrape a specific Fluentd instance. Critical FluentdQueueLengthIncreasing In the last 1h, fluentd <instance> buffer queue length constantly increased more than 1. Current value is <value>. Fluentd is reporting that the queue size is increasing. Warning FluentDVeryHighErrorRate <value> of records have resulted in an error by fluentd <instance>. The number of FluentD output errors is very high, by default more than 25 in the last 15 minutes. Critical 14.1.6. Elasticsearch alerting rules You can view these alerting rules in the OpenShift Container Platform web console. Table 14.3. Alerting rules Alert Description Severity ElasticsearchClusterNotHealthy The cluster health status has been RED for at least 2 minutes. The cluster does not accept writes, shards may be missing, or the master node has not been elected yet. Critical ElasticsearchClusterNotHealthy The cluster health status has been YELLOW for at least 20 minutes. Some shard replicas are not allocated. Warning ElasticsearchDiskSpaceRunningLow The cluster is expected to be out of disk space within the next 6 hours. Critical ElasticsearchHighFileDescriptorUsage The cluster is predicted to be out of file descriptors within the hour. Warning ElasticsearchJVMHeapUseHigh The JVM Heap usage on the specified node is high. Alert ElasticsearchNodeDiskWatermarkReached The specified node has hit the low watermark due to low free disk space. Shards cannot be allocated to this node anymore. You should consider adding more disk space to the node. Info ElasticsearchNodeDiskWatermarkReached The specified node has hit the high watermark due to low free disk space. Some shards will be re-allocated to different nodes if possible. Make sure more disk space is added to the node or drop old indices allocated to this node. Warning ElasticsearchNodeDiskWatermarkReached The specified node has hit the flood watermark due to low free disk space. Every index that has a shard allocated on this node has a read-only block enforced on it. The index block must be manually released when the disk use falls below the high watermark. Critical ElasticsearchJVMHeapUseHigh The JVM Heap usage on the specified node is too high. Alert ElasticsearchWriteRequestsRejectionJumps Elasticsearch is experiencing an increase in write rejections on the specified node. This node might not be keeping up with the indexing speed. Warning AggregatedLoggingSystemCPUHigh The CPU used by the system on the specified node is too high. Alert ElasticsearchProcessCPUHigh The CPU used by Elasticsearch on the specified node is too high. Alert 14.1.7. Additional resources Modifying core platform alerting rules 14.2. Custom logging alerts In logging 5.7 and later versions, users can configure the LokiStack deployment to produce customized alerts and recorded metrics. If you want to use customized alerting and recording rules , you must enable the LokiStack ruler component.
LokiStack log-based alerts and recorded metrics are triggered by providing LogQL expressions to the ruler component. The Loki Operator manages a ruler that is optimized for the selected LokiStack size, which can be 1x.extra-small , 1x.small , or 1x.medium . To provide these expressions, you must create an AlertingRule custom resource (CR) containing Prometheus-compatible alerting rules , or a RecordingRule CR containing Prometheus-compatible recording rules . Administrators can configure log-based alerts or recorded metrics for application , audit , or infrastructure tenants. Users without administrator permissions can configure log-based alerts or recorded metrics for application tenants of the applications that they have access to. Application, audit, and infrastructure alerts are sent by default to the OpenShift Container Platform monitoring stack Alertmanager in the openshift-monitoring namespace, unless you have disabled the local Alertmanager instance. If the Alertmanager that is used to monitor user-defined projects in the openshift-user-workload-monitoring namespace is enabled, application alerts are sent to the Alertmanager in this namespace by default. 14.2.1. Configuring the ruler When the LokiStack ruler component is enabled, users can define a group of LogQL expressions that trigger logging alerts or recorded metrics. Administrators can enable the ruler by modifying the LokiStack custom resource (CR). Prerequisites You have installed the Red Hat OpenShift Logging Operator and the Loki Operator. You have created a LokiStack CR. You have administrator permissions. Procedure Enable the ruler by ensuring that the LokiStack CR contains the following spec configuration: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: <name> namespace: <namespace> spec: # ... rules: enabled: true 1 selector: matchLabels: openshift.io/<label_name>: "true" 2 namespaceSelector: matchLabels: openshift.io/<label_name>: "true" 3 1 Enable Loki alerting and recording rules in your cluster. 2 Add a custom label that can be added to namespaces where you want to enable the use of logging alerts and metrics. 3 Add a custom label that can be added to namespaces where you want to enable the use of logging alerts and metrics. 14.2.2. Authorizing LokiStack rules RBAC permissions Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. In logging 5.8 and later, the following cluster roles for alerting and recording rules are available for LokiStack: Rule name Description alertingrules.loki.grafana.com-v1-admin Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch AlertingRule resources within the loki.grafana.com/v1 API group. alertingrules.loki.grafana.com-v1-crdview Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to AlertingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. alertingrules.loki.grafana.com-v1-edit Users with this role have permission to create, update, and delete AlertingRule resources. alertingrules.loki.grafana.com-v1-view Users with this role can read AlertingRule resources within the loki.grafana.com/v1 API group. 
They can inspect configurations, labels, and annotations for existing alerting rules but cannot make any modifications to them. recordingrules.loki.grafana.com-v1-admin Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch RecordingRule resources within the loki.grafana.com/v1 API group. recordingrules.loki.grafana.com-v1-crdview Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to RecordingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. recordingrules.loki.grafana.com-v1-edit Users with this role have permission to create, update, and delete RecordingRule resources. recordingrules.loki.grafana.com-v1-view Users with this role can read RecordingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing alerting rules but cannot make any modifications to them. 14.2.2.1. Examples To apply cluster roles for a user, you must bind an existing cluster role to a specific username. Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: Example cluster role binding command for alerting rule CRUD permissions in a specific namespace $ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username> The following command gives the specified user administrator permissions for alerting rules in all namespaces: Example cluster role binding command for administrator permissions $ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username> Additional resources Using RBAC to define and apply permissions 14.2.3. Creating a log-based alerting rule with Loki The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule. If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. If an AlertingRule CR includes an invalid LogQL expr , it is an invalid alerting rule. If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. If none of the above applies, an alerting rule is considered valid.
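Relating to the role bindings shown above, a quick way to confirm that a binding took effect is to ask the API server directly. This is an informal check rather than part of the documented procedure; the namespace and username are placeholders.
$ oc auth can-i create alertingrules.loki.grafana.com -n <namespace> --as=<username>
$ oc auth can-i create recordingrules.loki.grafana.com -n <namespace> --as=<username>
The command prints yes or no, so it can also be used in scripts before applying an AlertingRule or RecordingRule CR.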
Tenant type Valid namespaces for AlertingRule CRs application audit openshift-logging infrastructure openshift-* , kube-* , default Prerequisites Red Hat OpenShift Logging Operator 5.7 and later OpenShift Container Platform 4.13 and later Procedure Create an AlertingRule custom resource (CR): Example infrastructure AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "infrastructure" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) / sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 AlertingRule CRs for infrastructure tenants are only supported in the openshift-* , kube-* , or default namespaces. 4 The value for kubernetes_namespace_name: must match the value for metadata.namespace . 5 The value of this mandatory field must be critical , warning , or info . 6 This field is mandatory. 7 This field is mandatory. Example application AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "application" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 Value for kubernetes_namespace_name: must match the value for metadata.namespace . 4 The value of this mandatory field must be critical , warning , or info . 5 The value of this mandatory field is a summary of the rule. 6 The value of this mandatory field is a detailed description of the rule. Apply the AlertingRule CR: $ oc apply -f <filename>.yaml 14.2.4. Additional resources About OpenShift Container Platform monitoring Configuring alert notifications | [
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: <name> namespace: <namespace> spec: rules: enabled: true 1 selector: matchLabels: openshift.io/<label_name>: \"true\" 2 namespaceSelector: matchLabels: openshift.io/<label_name>: \"true\" 3",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/logging/logging-alerts |
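The chapter above describes RecordingRule CRs alongside AlertingRule CRs but only shows alerting examples. The following sketch suggests what an application-tenant RecordingRule might look like; it follows the same labeling conventions as the AlertingRule examples, but the metric name, namespace, and LogQL expression are illustrative, so check the RecordingRule CRD on your cluster for the authoritative schema before relying on it.
$ cat <<'EOF' | oc apply -f -
apiVersion: loki.grafana.com/v1
kind: RecordingRule
metadata:
  name: app-error-metrics
  namespace: app-ns
  labels:
    openshift.io/<label_name>: "true"
spec:
  tenantID: "application"
  groups:
  - name: AppUserWorkloadErrorMetrics
    interval: 1m
    rules:
    # Records the per-minute rate of log lines containing "error" as a metric
    - record: app_ns:error_lines:rate1m
      expr: |
        sum(rate({kubernetes_namespace_name="app-ns"} |= "error" [1m]))
EOF
As with AlertingRule CRs, the namespace must carry the label selected by the LokiStack spec.rules.namespaceSelector definition for the ruler to pick the rule up.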
Chapter 26. Desktop | Chapter 26. Desktop Poppler no longer renders certain characters incorrectly Previously, the Poppler library did not map certain characters correctly to character codes. As a consequence, Poppler showed the fi string instead of showing the correct glyph, or nothing, if the font did not contain the necessary glyphs. With this update, the characters previously replaced with the fi string are shown correctly. (BZ# 1298616 ) Poppler no longer tries to access memory behind the array Memory corruption due to exceeding the length of the array caused the Poppler library to terminate unexpectedly. A fix has been applied so that Poppler no longer tries to access memory behind the array, and Poppler no longer crashes in the described situation. (BZ#1299506) pdftocairo no longer crashes when processing a PDF without group color space Previously, the Poppler library tried to access a non-existing object when processing a PDF without group color space. As a consequence, the Poppler library terminated unexpectedly with a segmentation fault. A patch has been applied to verify if group color space exists. As a result, Poppler no longer crashes, and the pdftocairo utility works as expected in the described situation. (BZ#1299479) Poppler no longer terminates unexpectedly during text extraction Previously, a write after the end of the lines array could cause memory corruption. As a consequence, the Poppler library could terminate unexpectedly. A patch has been applied and the array is now always relocated when an item is added. As a result, Poppler no longer crashes in the described situation. (BZ#1299481) Poppler no longer terminates unexpectedly due to a missing GfxSeparationColorSpace class Previously, the Poppler library tried to copy a non-existing GfxSeparationColorSpace class and as a consequence terminated unexpectedly. With this update, Poppler now checks for existence of the GfxSeparationColorSpace class, and as a result no longer crashes in the described situation. (BZ#1299490) pdfinfo no longer terminates unexpectedly due to asserting broken encryption information Previously, Poppler tried to obtain broken encryption owner information. As a consequence, the pdfinfo utility terminated unexpectedly. A fix has been applied to address this bug, and Poppler no longer asserts broken encryption information. As a result, pdfinfo no longer crashes in the described situation. (BZ#1299500) Evince no longer crashes when viewing a PDF Previously, screen annotation and form fields passed a NULL pointer to _poppler_action_new , and Poppler created a false PopplerAction when viewing certain PDFs in the Evince application. As a consequence, Evince terminated unexpectedly with a segmentation fault. A patch has been applied to modify _poppler_annot_screen_new and poppler_form_field_get_action to pass PopplerDocument instead of NULL. As a result, Evince no longer crashes in the described situation. (BZ#1299503) Virtual machines started by GNOME Boxes are no longer accessible to every user Previously, virtual machines started by GNOME Boxes were listening on a local TCP socket. As a consequence, any user could connect to any virtual machine started by another user. A patch has been applied and GNOME Boxes no longer opens such sockets by default. As a result, the virtual machines are now accessible through SPICE only to the user who owns the virtual machine. (BZ# 1043950 ) GNOME boxes rebased to version 3.14.3.1 The GNOME boxes application has been updated to version 3.14.3.1.
Most notably, a patch to one bug has been applied as a part of this rebase: Previously, the virtual network computing (VNC) authentication parameters in the GNOME boxes application were not handled correctly. As a consequence, the connections to VNC servers with authentication failed. This bug has been fixed and the connection to VNC servers with authentication now works as expected. (BZ#1015199) FreeRDP now recognizes wildcard certificates Previously, wildcard certificate support was not implemented in FreeRDP. As a consequence, wildcard certificates were not recognized by FreeRDP , and the following warning was displayed when connecting: Missing functionality has been backported from upstream and code for comparing host names was improved. As a result, the mentioned prompt is no longer shown if a valid wildcard certificate is used. (BZ#1275241) Important security updates now installed automatically Previously, it was not possible to have security updates installed automatically. Even though GNOME notified the users about the available updates, they could choose to ignore the notification and not install the update. As a consequence, important updates could be left uninstalled. A gnome-shell extension is now available to enforce the installation of important updates. As a result, when new updates are available, a dialog window notifies the user that updates will be applied and they need to save their work. After a configurable amount of time, the system reboots to install the pending updates. (BZ# 1302864 ) Accounts' shells in accountsservice now always verified The accountsservice package heuristics for determining disabled accounts changed between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7. As a consequence, users with a UID outside the range 500 - 1000 would appear in the user list even if their shell was invalid. A patch has been applied to always verify the account's shell before the account is treated as a listable user account. As a result, the users with /sbin/nologin as a shell are now filtered out. (BZ#1341276) New way to handle desktop in Nautilus 3 Previously, icons in Nautilus 3 on the desktop were managed by taking the biggest monitor and trying to adapt the desktop window to the minimum common shape that would fit a rectangle. As a consequence, the icons could not be placed in random areas in some of the monitors, which could cause confusion for the user. This behavior has been changed to restrict the desktop window shape to the primary monitor. Even though this change does not allow the use of all available monitors as part of the desktop, it fixes the described bug. (BZ#1207646) GLX support in Xvnc sessions The GLX support code in Xvnc requires the use of the libGL library. If a third-party driver was installed and replaced libGL, Xvnc sessions launched with no GLX support. Consequently, 3D applications did not work under Xvnc. With this update, Xvnc has been rebuilt to require libGL, which is assumed to be installed in /usr/lib64/ . Now, third-party drivers installed in a sub-directory no longer conflict with Xvnc, which now initializes GLX successfully. As a result, GLX functionality is available again in Xvnc sessions. Note that client applications connecting to Xvnc need to use the same libGL version as the Xvnc server, which may require the use of the LD_LIBRARY_PATH environment variable.
(BZ#1326867) Flat document collections When using the gnome-documents application, it was possible to include one collection in another and vice versa at the same time. Consequently, the application terminated unexpectedly. This update ensures that the collections are flat and do not allow circular chains of collections, thus fixing this bug. (BZ# 958690 ) control-center no longer crashes when querying with special characters Previously, text entered by users when searching for a new printer required a specific character-set. Consequently, the control-center utility could terminate unexpectedly when searching for a printer name that contained a special character. With this update, the text is encoded into a valid ASCII format. As a result, control-center no longer crashes and correctly queries for printers. (BZ#1298952) gnome-control-center no longer crashes because of zero-length string Previously, the gnome-control-center utility worked with an empty string and an invalid pointer. As a consequence, it terminated unexpectedly. The gnome-control-center utility now checks whether the given application's identifier is at least 1 character long and initializes the new_app_ids pointer. As a result, the stated problem no longer occurs. (BZ#1298951) The Release Notes package is now installed correctly Previously, due to the naming of the Red Hat Enterprise Linux Release Notes packages, the packages were not installed on systems configured with a language other than English. This update provides additional parsing rules in the yum-languagepacks package. As a result, the Release Notes package is now installed correctly. (BZ#1263241) The LibreOffice language pack is now installed correctly for pt_BR , zh_CN , and zh_TW localizations Previously, translated libreoffice-langpack packages were not automatically installed on systems using language packs for the pt_BR , zh_CN , and zh_TW localizations. Parsing rules have been added to the yum language plug-in to address the problem. As a result, the correct LibreOffice language pack is installed. (BZ#1251388) | [
"WARNING: CERTIFICATE NAME MISMATCH!"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/bug_fixes_desktop |
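As a follow-up to the Xvnc note above, one informal way to confirm that a client is picking up the same libGL as the Xvnc server is to run a GLX client against the VNC display with LD_LIBRARY_PATH pointing at the system library directory; the display number and path here are examples only.
$ LD_LIBRARY_PATH=/usr/lib64 glxinfo -display :1 | grep -E 'OpenGL (vendor|renderer)'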
9.9. Configuring Resources to Remain Stopped on Clean Node Shutdown (Red Hat Enterprise Linux 7.8 and later) | 9.9. Configuring Resources to Remain Stopped on Clean Node Shutdown (Red Hat Enterprise Linux 7.8 and later) When a cluster node shuts down, Pacemaker's default response is to stop all resources running on that node and recover them elsewhere, even if the shutdown is a clean shutdown. As of Red Hat Enterprise Linux 7.8, you can configure Pacemaker so that when a node shuts down cleanly, the resources attached to the node will be locked to the node and unable to start elsewhere until they start again when the node that has shut down rejoins the cluster. This allows you to power down nodes during maintenance windows when service outages are acceptable without causing that node's resources to fail over to other nodes in the cluster. 9.9.1. Cluster Properties to Configure Resources to Remain Stopped on Clean Node Shutdown The ability to prevent resources from failing over on a clean node shutdown is implemented by means of the following cluster properties. shutdown-lock When this cluster property is set to the default value of false , the cluster will recover resources that are active on nodes being cleanly shut down. When this property is set to true , resources that are active on the nodes being cleanly shut down are unable to start elsewhere until they start on the node again after it rejoins the cluster. The shutdown-lock property will work for either cluster nodes or remote nodes, but not guest nodes. If shutdown-lock is set to true , you can remove the lock on one cluster resource when a node is down so that the resource can start elsewhere by performing a manual refresh on the node with the following command. Note that once the resources are unlocked, the cluster is free to move the resources elsewhere. You can control the likelihood of this occurring by using stickiness values or location preferences for the resource. Note A manual refresh will work with remote nodes only if you first run the following commands: Run the systemctl stop pacemaker_remote command on the remote node to stop the node. Run the pcs resource disable remote-connection-resource command. You can then perform a manual refresh on the remote node. shutdown-lock-limit When this cluster property is set to a time other than the default value of 0, resources will be available for recovery on other nodes if the node does not rejoin within the specified time since the shutdown was initiated. Note, however, that the time interval will not be checked any more often than the value of the cluster-recheck-interval cluster property. Note The shutdown-lock-limit property will work with remote nodes only if you first run the following commands: Run the systemctl stop pacemaker_remote command on the remote node to stop the node. Run the pcs resource disable remote-connection-resource command. After you run these commands, the resources that had been running on the remote node will be available for recovery on other nodes when the amount of time specified as the shutdown-lock-limit has passed. 9.9.2. Setting the shutdown-lock Cluster Property The following example sets the shutdown-lock cluster property to true in an example cluster and shows the effect this has when the node is shut down and started again. This example cluster consists of three nodes: z1.example.com , z2.example.com , and z3.example.com . Set the shutdown-lock property to true and verify its value.
In this example, the shutdown-lock-limit property maintains its default value of 0. Check the status of the cluster. In this example, resources third and fifth are running on z1.example.com. Shut down z1.example.com, which will stop the resources that are running on that node. Running the pcs status command shows that node z1.example.com is offline and that the resources that had been running on z1.example.com are LOCKED while the node is down. Start cluster services again on z1.example.com so that it rejoins the cluster. Locked resources should get started on that node, although once they start they will not necessarily remain on the same node. In this example, resources third and fifth are recovered on node z1.example.com. | [
"pcs resource refresh resource --node node",
"pcs property set shutdown-lock=true pcs property list --all | grep shutdown-lock shutdown-lock: true shutdown-lock-limit: 0",
"pcs status Full List of Resources: * first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Started z1.example.com * fourth (ocf::pacemaker:Dummy): Started z2.example.com * fifth (ocf::pacemaker:Dummy): Started z1.example.com",
"pcs cluster stop z1.example.com Stopping Cluster (pacemaker) Stopping Cluster (corosync)",
"pcs status Node List: * Online: [ z2.example.com z3.example.com ] * OFFLINE: [ z1.example.com ] Full List of Resources: * first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Stopped z1.example.com (LOCKED) * fourth (ocf::pacemaker:Dummy): Started z3.example.com * fifth (ocf::pacemaker:Dummy): Stopped z1.example.com (LOCKED)",
"pcs cluster start z1.example.com Starting Cluster",
"pcs status Node List: * Online: [ z1.example.com z2.example.com z3.example.com ] Full List of Resources: .. * first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Started z1.example.com * fourth (ocf::pacemaker:Dummy): Started z3.example.com * fifth (ocf::pacemaker:Dummy): Started z1.example.com"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-shutdown-lock-HAAR |
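In addition to shutdown-lock, you can cap how long stopped resources remain locked by also setting the shutdown-lock-limit property described above. The following is a minimal sketch; the 5-minute value is an illustrative assumption rather than part of the original example, and Pacemaker accepts interval values such as 5min or 600s:

pcs property set shutdown-lock-limit=5min
pcs property list --all | grep shutdown-lock

Once the limit expires without the node rejoining, the cluster recovers the locked resources on other nodes, so choose a value that comfortably covers your planned maintenance window.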
Chapter 118. Migrating from NIS to Identity Management | Chapter 118. Migrating from NIS to Identity Management A Network Information Service (NIS) server can contain information about users, groups, hosts, netgroups, and automount maps. As a system administrator, you can migrate these entry types, authentication, and authorization from the NIS server to an Identity Management (IdM) server so that all user management operations are performed on the IdM server. Migrating from NIS to IdM also gives you access to more secure protocols, such as Kerberos. 118.1. Enabling NIS in IdM To allow communication between the NIS server and the Identity Management (IdM) server, you must enable NIS compatibility options on the IdM server. Prerequisites You have root access on the IdM server. Procedure Enable the NIS listener and compatibility plug-ins on the IdM server: Optional: For a stricter firewall configuration, set a fixed port. For example, to set the port to the unused port 514: Warning To avoid conflicts with other services, do not use any port number above 1024. Enable and start the port mapper service: Restart Directory Server: 118.2. Migrating user entries from NIS to IdM The NIS passwd map contains information about users, such as names, UIDs, primary group, GECOS, shell, and home directory. Use this data to migrate NIS user accounts to Identity Management (IdM): Prerequisites You have root access on the NIS server. NIS is enabled in IdM. The NIS server is enrolled into IdM. Procedure Install the yp-tools package: On the NIS server, create the /root/nis-users.sh script with the following content: #!/bin/sh # $1 is the NIS domain, $2 is the primary NIS server ypcat -d $1 -h $2 passwd > /dev/shm/nis-map.passwd 2>&1 IFS=$'\n' for line in $(cat /dev/shm/nis-map.passwd) ; do IFS=' ' username=$(echo $line | cut -f1 -d:) # Not collecting encrypted password because we need cleartext password # to create kerberos key uid=$(echo $line | cut -f3 -d:) gid=$(echo $line | cut -f4 -d:) gecos=$(echo $line | cut -f5 -d:) homedir=$(echo $line | cut -f6 -d:) shell=$(echo $line | cut -f7 -d:) # Now create this entry echo passw0rd1 | ipa user-add $username --first=NIS --last=USER \ --password --gidnumber=$gid --uid=$uid --gecos="$gecos" --homedir=$homedir \ --shell=$shell ipa user-show $username done Authenticate as the IdM admin user: Run the script. For example: Important This script uses hard-coded values for the first name and last name, and sets the password to passw0rd1. The user must change the temporary password at the first login. 118.3. Migrating user groups from NIS to IdM The NIS group map contains information about groups, such as group names, GIDs, or group members. Use this data to migrate NIS groups to Identity Management (IdM): Prerequisites You have root access on the NIS server. NIS is enabled in IdM. The NIS server is enrolled into IdM.
Procedure Install the yp-tools package: Create the /root/nis-groups.sh script with the following content on the NIS server: #!/bin/sh # $1 is the NIS domain, $2 is the primary NIS server ypcat -d $1 -h $2 group > /dev/shm/nis-map.group 2>&1 IFS=$'\n' for line in $(cat /dev/shm/nis-map.group); do IFS=' ' groupname=$(echo $line | cut -f1 -d:) # Not collecting encrypted password because we need cleartext password # to create kerberos key gid=$(echo $line | cut -f3 -d:) members=$(echo $line | cut -f4 -d:) # Now create this entry ipa group-add $groupname --desc=NIS_GROUP_$groupname --gid=$gid if [ -n "$members" ]; then ipa group-add-member $groupname --users={$members} fi ipa group-show $groupname done Authenticate as the IdM admin user: Run the script. For example: 118.4. Migrating host entries from NIS to IdM The NIS hosts map contains information about hosts, such as host names and IP addresses. Use this data to migrate NIS host entries to Identity Management (IdM): Note When you create a host group in IdM, a corresponding shadow NIS group is automatically created. Do not use the ipa netgroup-* commands on these shadow NIS groups. Use the ipa netgroup-* commands only to manage native netgroups created via the netgroup-add command. Prerequisites You have root access on the NIS server. NIS is enabled in IdM. The NIS server is enrolled into IdM. Procedure Install the yp-tools package: Create the /root/nis-hosts.sh script with the following content on the NIS server: #!/bin/sh # $1 is the NIS domain, $2 is the primary NIS server ypcat -d $1 -h $2 hosts | egrep -v "localhost|127.0.0.1" > /dev/shm/nis-map.hosts 2>&1 IFS=$'\n' for line in $(cat /dev/shm/nis-map.hosts); do IFS=' ' ipaddress=$(echo $line | awk '{print $1}') hostname=$(echo $line | awk '{print $2}') primary=$(ipa env xmlrpc_uri | tr -d '[:space:]' | cut -f3 -d: | cut -f3 -d/) domain=$(ipa env domain | tr -d '[:space:]' | cut -f2 -d:) if [ $(echo $hostname | grep "\." |wc -l) -eq 0 ] ; then hostname=$(echo $hostname.$domain) fi zone=$(echo $hostname | cut -f2- -d.) if [ $(ipa dnszone-show $zone 2>/dev/null | wc -l) -eq 0 ] ; then ipa dnszone-add --name-server=$primary --admin-email=root.$primary fi ptrzone=$(echo $ipaddress | awk -F. '{print $3 "." $2 "." $1 ".in-addr.arpa."}') if [ $(ipa dnszone-show $ptrzone 2>/dev/null | wc -l) -eq 0 ] ; then ipa dnszone-add $ptrzone --name-server=$primary --admin-email=root.$primary fi # Now create this entry ipa host-add $hostname --ip-address=$ipaddress ipa host-show $hostname done Authenticate as the IdM admin user: Run the script. For example: Note This script does not migrate special host configurations, such as aliases. 118.5. Migrating netgroup entries from NIS to IdM The NIS netgroup map contains information about netgroups. Use this data to migrate NIS netgroups to Identity Management (IdM): Prerequisites You have root access on the NIS server. NIS is enabled in IdM. The NIS server is enrolled into IdM.
Procedure Install the yp-tools package: Create the /root/nis-netgroups.sh script with the following content on the NIS server: #!/bin/sh # $1 is the NIS domain, $2 is the primary NIS server ypcat -k -d $1 -h $2 netgroup > /dev/shm/nis-map.netgroup 2>&1 IFS=$'\n' for line in $(cat /dev/shm/nis-map.netgroup); do IFS=' ' netgroupname=$(echo $line | awk '{print $1}') triples=$(echo $line | sed "s/^$netgroupname //") echo "ipa netgroup-add $netgroupname --desc=NIS_NG_$netgroupname" if [ $(echo $line | grep "(," | wc -l) -gt 0 ]; then echo "ipa netgroup-mod $netgroupname --hostcat=all" fi if [ $(echo $line | grep ",," | wc -l) -gt 0 ]; then echo "ipa netgroup-mod $netgroupname --usercat=all" fi for triple in $triples; do triple=$(echo $triple | sed -e 's/-//g' -e 's/(//' -e 's/)//') if [ $(echo $triple | grep ",.*," | wc -l) -gt 0 ]; then hostname=$(echo $triple | cut -f1 -d,) username=$(echo $triple | cut -f2 -d,) domain=$(echo $triple | cut -f3 -d,) hosts=""; users=""; doms=""; [ -n "$hostname" ] && hosts="--hosts=$hostname" [ -n "$username" ] && users="--users=$username" [ -n "$domain" ] && doms="--nisdomain=$domain" echo "ipa netgroup-add-member $netgroup $hosts $users $doms" else netgroup=$triple echo "ipa netgroup-add $netgroup --desc=<NIS_NG>_$netgroup" fi done done Authenticate as the IdM admin user: Run the script. For example: 118.6. Migrating automount maps from NIS to IdM Automount maps are a series of nested and interrelated entries that define the location (the parent entry), the associated keys, and maps. To migrate NIS automount maps to Identity Management (IdM): Prerequisites You have root access on the NIS server. NIS is enabled in IdM. The NIS server is enrolled into IdM. Procedure Install the yp-tools package: Create the /root/nis-automounts.sh script with the following content on the NIS server: #!/bin/sh # $1 is for the automount entry in ipa ipa automountlocation-add $1 # $2 is the NIS domain, $3 is the primary NIS server, $4 is the map name ypcat -k -d $2 -h $3 $4 > /dev/shm/nis-map.$4 2>&1 ipa automountmap-add $1 $4 basedn=$(ipa env basedn | tr -d '[:space:]' | cut -f2 -d:) cat > /tmp/amap.ldif <<EOF dn: nis-domain=$2+nis-map=$4,cn=NIS Server,cn=plugins,cn=config objectClass: extensibleObject nis-domain: $2 nis-map: $4 nis-base: automountmapname=$4,cn=$1,cn=automount,$basedn nis-filter: (objectclass=\*) nis-key-format: %{automountKey} nis-value-format: %{automountInformation} EOF ldapadd -x -h $3 -D "cn=Directory Manager" -W -f /tmp/amap.ldif IFS=$'\n' for line in $(cat /dev/shm/nis-map.$4); do IFS=" " key=$(echo "$line" | awk '{print $1}') info=$(echo "$line" | sed -e "s^$key[ \t]*") ipa automountkey-add nis $4 --key="$key" --info="$info" done Note The script exports the NIS automount information, generates an LDAP Data Interchange Format (LDIF) for the automount location and associated map, and imports the LDIF file into the IdM Directory Server. Authenticate as the IdM admin user: Run the script. For example: | [
"ipa-nis-manage enable ipa-compat-manage enable",
"ldapmodify -x -D 'cn=directory manager' -W dn: cn=NIS Server,cn=plugins,cn=config changetype: modify add: nsslapd-pluginarg0 nsslapd-pluginarg0: 514",
"systemctl enable rpcbind.service systemctl start rpcbind.service",
"systemctl restart dirsrv.target",
"yum install yp-tools -y",
"#!/bin/sh $1 is the NIS domain, $2 is the primary NIS server ypcat -d $1 -h $2 passwd > /dev/shm/nis-map.passwd 2>&1 IFS=$'\\n' for line in $(cat /dev/shm/nis-map.passwd) ; do IFS=' ' username=$(echo $line | cut -f1 -d:) # Not collecting encrypted password because we need cleartext password # to create kerberos key uid=$(echo $line | cut -f3 -d:) gid=$(echo $line | cut -f4 -d:) gecos=$(echo $line | cut -f5 -d:) homedir=$(echo $line | cut -f6 -d:) shell=$(echo $line | cut -f7 -d:) # Now create this entry echo passw0rd1 | ipa user-add $username --first=NIS --last=USER --password --gidnumber=$gid --uid=$uid --gecos=\"$gecos\" --homedir=$homedir --shell=$shell ipa user-show $username done",
"kinit admin",
"sh /root/nis-users.sh nisdomain nis-server.example.com",
"yum install yp-tools -y",
"#!/bin/sh $1 is the NIS domain, $2 is the primary NIS server ypcat -d $1 -h $2 group > /dev/shm/nis-map.group 2>&1 IFS=$'\\n' for line in $(cat /dev/shm/nis-map.group); do IFS=' ' groupname=$(echo $line | cut -f1 -d:) # Not collecting encrypted password because we need cleartext password # to create kerberos key gid=$(echo $line | cut -f3 -d:) members=$(echo $line | cut -f4 -d:) # Now create this entry ipa group-add $groupname --desc=NIS_GROUP_$groupname --gid=$gid if [ -n \"$members\" ]; then ipa group-add-member $groupname --users={$members} fi ipa group-show $groupname done",
"kinit admin",
"sh /root/nis-groups.sh nisdomain nis-server.example.com",
"yum install yp-tools -y",
"#!/bin/sh $1 is the NIS domain, $2 is the primary NIS server ypcat -d $1 -h $2 hosts | egrep -v \"localhost|127.0.0.1\" > /dev/shm/nis-map.hosts 2>&1 IFS=$'\\n' for line in $(cat /dev/shm/nis-map.hosts); do IFS=' ' ipaddress=$(echo $line | awk '{print $1}') hostname=$(echo $line | awk '{print $2}') primary=$(ipa env xmlrpc_uri | tr -d '[:space:]' | cut -f3 -d: | cut -f3 -d/) domain=$(ipa env domain | tr -d '[:space:]' | cut -f2 -d:) if [ $(echo $hostname | grep \"\\.\" |wc -l) -eq 0 ] ; then hostname=$(echo $hostname.$domain) fi zone=$(echo $hostname | cut -f2- -d.) if [ $(ipa dnszone-show $zone 2>/dev/null | wc -l) -eq 0 ] ; then ipa dnszone-add --name-server=$primary --admin-email=root.$primary fi ptrzone=$(echo $ipaddress | awk -F. '{print $3 \".\" $2 \".\" $1 \".in-addr.arpa.\"}') if [ $(ipa dnszone-show $ptrzone 2>/dev/null | wc -l) -eq 0 ] ; then ipa dnszone-add $ptrzone --name-server=$primary --admin-email=root.$primary fi # Now create this entry ipa host-add $hostname --ip-address=$ipaddress ipa host-show $hostname done",
"kinit admin",
"sh /root/nis-hosts.sh nisdomain nis-server.example.com",
"yum install yp-tools -y",
"#!/bin/sh $1 is the NIS domain, $2 is the primary NIS server ypcat -k -d $1 -h $2 netgroup > /dev/shm/nis-map.netgroup 2>&1 IFS=$'\\n' for line in $(cat /dev/shm/nis-map.netgroup); do IFS=' ' netgroupname=$(echo $line | awk '{print $1}') triples=$(echo $line | sed \"s/^$netgroupname //\") echo \"ipa netgroup-add $netgroupname --desc=NIS_NG_$netgroupname\" if [ $(echo $line | grep \"(,\" | wc -l) -gt 0 ]; then echo \"ipa netgroup-mod $netgroupname --hostcat=all\" fi if [ $(echo $line | grep \",,\" | wc -l) -gt 0 ]; then echo \"ipa netgroup-mod $netgroupname --usercat=all\" fi for triple in $triples; do triple=$(echo $triple | sed -e 's/-//g' -e 's/(//' -e 's/)//') if [ $(echo $triple | grep \",.*,\" | wc -l) -gt 0 ]; then hostname=$(echo $triple | cut -f1 -d,) username=$(echo $triple | cut -f2 -d,) domain=$(echo $triple | cut -f3 -d,) hosts=\"\"; users=\"\"; doms=\"\"; [ -n \"$hostname\" ] && hosts=\"--hosts=$hostname\" [ -n \"$username\" ] && users=\"--users=$username\" [ -n \"$domain\" ] && doms=\"--nisdomain=$domain\" echo \"ipa netgroup-add-member $netgroup $hosts $users $doms\" else netgroup=$triple echo \"ipa netgroup-add $netgroup --desc=<NIS_NG>_$netgroup\" fi done done",
"kinit admin",
"sh /root/nis-netgroups.sh nisdomain nis-server.example.com",
"yum install yp-tools -y",
"#!/bin/sh $1 is for the automount entry in ipa ipa automountlocation-add $1 $2 is the NIS domain, $3 is the primary NIS server, $4 is the map name ypcat -k -d $2 -h $3 $4 > /dev/shm/nis-map.$4 2>&1 ipa automountmap-add $1 $4 basedn=$(ipa env basedn | tr -d '[:space:]' | cut -f2 -d:) cat > /tmp/amap.ldif <<EOF dn: nis-domain=$2+nis-map=$4,cn=NIS Server,cn=plugins,cn=config objectClass: extensibleObject nis-domain: $2 nis-map: $4 nis-base: automountmapname=$4,cn=$1,cn=automount,$basedn nis-filter: (objectclass=\\*) nis-key-format: %{automountKey} nis-value-format: %{automountInformation} EOF ldapadd -x -h $3 -D \"cn=Directory Manager\" -W -f /tmp/amap.ldif IFS=$'\\n' for line in $(cat /dev/shm/nis-map.$4); do IFS=\" \" key=$(echo \"$line\" | awk '{print $1}') info=$(echo \"$line\" | sed -e \"s^$key[ \\t]*\") ipa automountkey-add nis $4 --key=\"$key\" --info=\"$info\" done",
"kinit admin",
"sh /root/nis-automounts.sh location nisdomain nis-server.example.com map_name"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/assembly_migrating-from-nis-to-identity-management_configuring-and-managing-idm |
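If you have several automount maps to migrate, you can run the nis-automounts.sh script shown above once per map. The following loop is a minimal sketch: the map names auto.master and auto.home are assumptions, while location, nisdomain, and nis-server.example.com are the same placeholders used in the example above. Because the script re-adds the automount location on every pass, an "already exists" message for the location after the first map is expected:

for map in auto.master auto.home; do
    sh /root/nis-automounts.sh location nisdomain nis-server.example.com "$map"
done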
Chapter 23. Schema for Red Hat Quay configuration | Chapter 23. Schema for Red Hat Quay configuration Most Red Hat Quay configuration information is stored in the config.yaml file that is created using the browser-based config tool when Red Hat Quay is first deployed. All configuration options are described in the Red Hat Quay Configuration Guide. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/manage_red_hat_quay/quay-schema